I think it’s important to remember how this used to happen.
AT&T paid voice actors to record phoneme groups in the ’90s and 2000s and has been using those recordings to train voice models for decades now. There are about a dozen AT&T voices we’re all super familiar with because they’re on all those IVR/PBX replacement systems we talk to instead of humans now.
The AT&T voice actors were paid for their time and not offered royalties, but they were told that their voices would be used to generate synthetic computer voices.
This was a consensual exchange of work. It’s not super great long term, since there are no royalties or anything and it’s really just a “work for hire” that turns into a product, but that aside, the people involved all agreed to what they were doing and what their work would be used for.
The problem at the root of all the generative tools is ultimately one of consent. We don’t permit the arbitrary copying of things that are perceived to be owned by people, nor do we think it’s appropriate to use someone’s “image, likeness, voice, or written works” without their consent.
Artists tell politicians to stop using their music all the time, etc. But ultimately, until we get a real ruling on what constitutes a “derivative” work, nothing will happen. An AI model is effectively a derivative work of all the content that makes up the vectors that represent it, so it seems like a no-brainer, but because it’s “radio on the internet” we’re not supposed to be mad at Napster for building its whole business on breaking the law.
I don’t think permission and consent alone can govern a labor relationship, because of the unbalanced position of power employees and employers have with each other. Could the workers really negotiate better working conditions? They really can’t, not without a union anyway.
I think a more interesting (and less dubious) example of this would be Vocaloid and, to a greater extent, CeVIO AI.
Vocaloid is a synth bank where, instead of the notes being musical instruments, they’re phonemes which have been recorded and then packaged into a product you pay for, which means royalties are involved (I think there may also be separate royalties for big performances and whatnot). CeVIO AI takes this a step further by using AI to smooth the phonemes together and make pitching sound more natural (or not; it’s an instrument, and you can break it in interesting ways if you try hard enough). And obviously, the voice providers consented to that specific thing and get paid for it: they gave Yamaha/Sony/the general public a specific character voice and permission to use that specific voice.
(There are FOSS voicebanks, but that adds another layer of complication, since I think a lot of them were recorded before the idea of an “AI bank” was even a possibility. And while a paid voicebank is a proprietary thing, the open source alternatives are literally just a big pile of .WAV files, so it’s much easier to take them outside their intended purposes.)
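For a concrete sense of what “a big pile of .WAV files” gets you, here’s a minimal sketch of the concatenative idea behind a voicebank: play recorded phoneme clips back-to-back in lyric order. This is only an illustration with made-up file names (“ka.wav”, etc.); real engines like Vocaloid, UTAU, and CeVIO also pitch-shift, time-stretch, and crossfade the samples, none of which is shown here.

```python
# Minimal concatenative "voicebank" sketch: glue phoneme .wav clips together.
# File names and directory layout are hypothetical; real engines do far more
# (pitch-shifting, time-stretching, crossfading between samples).
import wave

def render_phrase(phonemes, bank_dir, out_path):
    """Naively concatenate one .wav clip per phoneme into a single output file."""
    frames, params = [], None
    for p in phonemes:
        with wave.open(f"{bank_dir}/{p}.wav", "rb") as clip:
            if params is None:
                params = clip.getparams()  # sample rate, sample width, channel count
            frames.append(clip.readframes(clip.getnframes()))
    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        for chunk in frames:
            out.writeframes(chunk)

# Hypothetical usage: render_phrase(["ka", "e", "de"], "voicebank", "phrase.wav")
```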