At the moment we are trying to set out a plan for a range of development strands: level building, multiplayer, monsters, Blessings and Curses, and of course sound and music. We have also been thinking about the Mad Professor being in your ear, giving you updates and cryptic advice.
For the moment we are using a synthetic voice, which gives us the flexibility to change dialogue quickly and to react to events more readily than we could with a traditional voice-over.
As we are a tiny game studio, costs must be tightly monitored. We would love to cast voice actors, and we want to support the acting industry, but we also need to think about where our money goes. Using AI for in-game speech still lets us achieve an immersive sound experience, and we can also use voice-cloning techniques.
This leaves us the flexibility to bring in voice talent in the future. For now we have created the Mad Professor's voice using a game-focused AI system. It provides an API for dynamic voice generation, but at the moment we simply use it to generate standard audio files that we then import into our development environment.
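To make that workflow concrete, here is a minimal sketch of how dialogue lines could be batch-generated and saved as WAV files ready to drop into the project. The endpoint URL, the VOICE_API_KEY environment variable, the voice name and the example lines are all placeholders for illustration, not the actual service or script we use.

import os
import requests

# Hypothetical voice-generation endpoint and credential -- placeholders only.
API_URL = "https://api.example-voice-service.com/v1/generate"
API_KEY = os.environ["VOICE_API_KEY"]

# Example dialogue lines keyed by an ID we can reference from the game.
LINES = {
    "intro_01": "Ah, you're awake. Try not to touch anything important.",
    "hint_03": "The blue door only opens when the generator hums twice.",
}

def generate_line(line_id, text, out_dir="audio/professor"):
    """Request synthesized speech for one line and save it as a WAV file."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"voice": "mad_professor", "text": text, "format": "wav"},
        timeout=30,
    )
    response.raise_for_status()

    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"{line_id}.wav")
    with open(path, "wb") as f:
        f.write(response.content)  # raw audio bytes, ready to import into the engine
    return path

if __name__ == "__main__":
    for line_id, text in LINES.items():
        print("wrote", generate_line(line_id, text))

The point of working this way is that a changed line of dialogue is just a re-run of the script, and the resulting files are imported into the engine like any other recorded audio, so nothing downstream needs to know how the voice was produced.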