Respeecher’s ethics-first approach to AI voice cloning locks in new funding
Ukrainian synthetic voice startup Respeecher is finding success not only despite the bombs raining down on its home city, but also despite a wave of publicity that has pitted it against sometimes controversial competitors. The new $1M in funding will help the company add more studios to its roster of media and gaming clients.
Respeecher is perhaps best known for being chosen to recreate James Earl Jones’ iconic Darth Vader voice for the Star Wars series Obi-Wan Kenobi, and later a young Luke Skywalker for The Mandalorian. But the company has also worked with game developer CD Projekt (of Witcher and Cyberpunk fame) and recently inked a deal with Warner Music to recreate another iconic voice: French singer Edith Piaf.
Unlike text-to-speech engines, Respeecher uses voice models to modify the speech of actors, who do their best to recreate the voice or character in question. The result is a performance, not merely a generated, artificial voice. The company also offers accent modification, helping actors soften an unwanted accent or adopt a desired one.
The ethical questions involved in cloning someone’s voice are obvious, especially when that someone is long dead and cannot meaningfully consent. Some startups and services have simply sidestepped the issue entirely, seeing it as a losing battle in many ways. (Needless to say, that limits their range of clients.)
Respeecher has made ethics a pillar of its business across its various sectors.
“Consent is obtained from those who have authority; in the case of deceased actors, it could be the estate or family,” said CEO and co-founder Alex Serdiuk. “There are many cases when they are very involved in the process and provide valuable feedback to get the voice right, because such projects are a tribute to their relatives, their contributions, and the characters they created.”
Recently, the company worked with Calm on a voiceover based on the voice of old Hollywood star Jimmy Stewart.
Consent and compensation are worked out from the beginning. Voice actors are starting to view these voice models as assets to be controlled and monetized rather than (or perhaps in addition to) a threat to their livelihood. Respeecher is building a voice library of actors who have opted into the process, and the company has also joined Adobe’s Content Authenticity Initiative (for what it’s worth).
By not focusing on scaling like crazy during a big year for AI, Respeecher may have missed out on some capital or business opportunities. But slow and steady might actually serve it well in this case, and besides, the team had a lot going on in Kyiv last year.
“Like all Ukrainian businesses and startups, this war taught us what it really means to be resilient,” Serdiuk said. “Raising money is never easy, and perhaps it would be easier if Russia did not regularly attack our cities with missiles or Shahed drones. After all this, I believe there is hardly any obstacle our team could not overcome or solution we could not find.”
The company has also managed to pioneer a new line of work during these chaotic times: synthetic voices for people who have lost the ability to speak on their own. Other startups and established companies have entered this space as well, which may not be as lucrative but can change lives.
“We have many projects for hospitals as well as for patients with ataxia or laryngectomy. We had the opportunity to work with Konrad Zielinski, a PhD student at the University of Warsaw, who lost his voice due to a laryngectomy. Our technology helped him communicate in a more natural way in his own voice,” Serdiuk said. You can read more about Konrad’s case in this blog post.
Respeecher announced today that it has raised a $1 million “pre-Series A” round, led by entrepreneur Gary Vaynerchuk, with the funds ffVC Poland, Bad Ideas, ICU and SID Venture Partners contributing.