Music production with artificial intelligence, the platforms operating in this field, and a look at AI's impact on the music industry.
Some of you are surely familiar with David Byrne, the mastermind of the US band Talking Heads. Byrne is a musician and writer who also puts his musical life into words. In his book “How Music Works”, published in Turkey by Mundi Kitap, he touches on the evolutionary development of music and its intersections with technology. In it, Byrne notes that the digitization of sound, and soon after of all kinds of information, was largely the work of a telephone company.
Bell Laboratories, a division of the Bell Telephone Company, was tasked with finding a more efficient and reliable way to transmit conversations.
In 1962, Bell Laboratories worked out how to digitize sound: essentially, sampling a sound wave and breaking it into small segments that could be represented by ones and zeros.
As we can see, the digitization of sound arose out of a completely different need. In fact, this is one of the most beautiful aspects of technology: finding something else while looking for something. From records to cassettes, from cassettes to CDs and MP3s, we have now arrived at a fully digital music world. The founding, spread, adoption, and eventual retirement of platforms, Spotify first and others after it, is one of the most notable examples of this.
Yes, with digitalization we entered a different era in music. But a very different era awaits us now, and its most important technology is “artificial intelligence”, once only a science fiction term.
How is music created with artificial intelligence?
First, let's talk about how music is created with artificial intelligence in a non-technical way. AI image generation and AI music generation actually follow the same logic. Machine learning models take their inputs as numerical vectors that represent the data we want to give the model in a form it can understand. In other words, to produce music with machine learning, and therefore with artificial intelligence, the music must first be converted into digital form. The first step is to think of the melody as a sequence of numerical markers. Note that each vector can carry information about pitch, rhythm, and timbre, among other representable properties.
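To make this concrete, here is a minimal sketch of encoding a melody as numerical vectors. The encoding is illustrative, not any particular platform's scheme: each note becomes a fixed-size vector of (MIDI pitch number, duration in beats, velocity), and the note names and values are made up for the example.

```python
# Illustrative melody: (note name, duration in beats, velocity 0-127).
melody = [
    ("C4", 1.0, 80),
    ("E4", 0.5, 90),
    ("G4", 0.5, 90),
    ("C5", 2.0, 100),
]

# Standard MIDI pitch numbers for the notes used above (C4 = 60).
NOTE_TO_MIDI = {"C4": 60, "E4": 64, "G4": 67, "C5": 72}

def encode(notes):
    """Map each note to a fixed-size numeric vector: [pitch, duration, velocity]."""
    return [[NOTE_TO_MIDI[name], dur, vel] for name, dur, vel in notes]

vectors = encode(melody)
print(vectors[0])  # [60, 1.0, 80]
```

A real system would use a richer representation (timbre, expression, polyphony), but the principle is the same: music in, numbers out.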
MIDI files, structured files that provide sequential information such as notes, rhythm changes, and BPM, can be used to train models to discover patterns. Other approaches work directly on raw audio, using a raw representation of the waveform at each time step. With these sequences as input vectors, models are often trained the way natural language processing (NLP) models are: given a sequence, predict its next token at each time step.
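The next-token idea can be sketched with a toy predictor. A real system would train a neural network; here a simple bigram count table, built from an invented token sequence, stands in for it, predicting each token's most frequently observed successor.

```python
from collections import Counter, defaultdict

# Invented tokenized melody for illustration.
tokens = ["C", "E", "G", "C", "E", "G", "C", "E", "A"]

# Count which token follows which: a crude stand-in for training.
bigrams = defaultdict(Counter)
for cur, nxt in zip(tokens, tokens[1:]):
    bigrams[cur][nxt] += 1

def predict_next(token):
    """Return the most frequently observed successor of `token`."""
    return bigrams[token].most_common(1)[0][0]

print(predict_next("C"))  # "E": every "C" in the data is followed by "E"
print(predict_next("E"))  # "G": "E" is followed by "G" twice, "A" once
```

Generation then works by sampling a token, feeding it back in, and repeating, which is exactly how NLP-style sequence models produce text, and here, music.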
The most important thing to emphasize at this point: platforms that produce music with artificial intelligence do not all use the same system. Some use transformers, a neural network architecture built around special layers called attention layers, while others may use other neural network configurations, for example to change the pitch of a MIDI file. Whatever the approach, the essential steps are digitizing the sound and training on it. Of course, I should underline that I approach this as an area of interest, so there are surely things I have missed or misunderstood; I may even be misinterpreting some points. But if, like me, you are not a developer, that doesn't really matter much.
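For the curious, the attention layers mentioned above boil down to one formula, softmax(QK^T / sqrt(d)) V. Below is a minimal NumPy sketch of that scaled dot-product attention; the shapes and random values are illustrative, not taken from any real music model.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 sequence positions, 8-dim vectors
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))

out = attention(Q, K, V)
print(out.shape)  # (4, 8): one contextualized vector per position
```

Each output vector is a weighted mix of all positions in the sequence, which is what lets a transformer relate a note to notes far earlier in the piece.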
Music creation platforms with artificial intelligence
Let's not move on without mentioning a few of the most frequently used platforms in this field.
Amper Music is one of the easiest AI music generators to use. Amper doesn't require deep knowledge of music theory or composition, as it creates music tracks from pre-recorded samples. AIVA is another AI-powered music creator. With this platform, it is possible to compose music for commercials, video games, movies and more; music in many genres and styles can be created easily by choosing a preset style. Ecrett Music, for its part, is trained on hundreds of hours of existing songs and lets anyone create music clips. Its clean interface suits both amateurs and professionals.
Soundraw lets you customize a song with phrases generated by artificial intelligence; the tool combines AI with a collection of manual tools that let you create and customize new music with ease. Boomy lets you create original songs in seconds, and since the songs are unique, it is possible to generate income from them later. OpenAI has its own online AI music generator called MuseNet, which can produce songs with up to ten different instruments in 15 different styles; you can even browse these pieces on OpenAI's SoundCloud account. Amadeus Code, on the other hand, is an AI engine built on the chord progressions of some of the world's most famous songs, which can then be used to create new compositional structures.
Although these are among the most preferred platforms, the number of tools serving this field is incredible, and it is not possible to cover them all in one article. As demand for artificial intelligence has grown, we frequently feature AI music creation tools on Webrazzi, and will continue to do so.
What the user does on these platforms is basically very simple. You enter a platform, sign up if necessary, and choose the genre, what the music will be used for, and the mood of the song or sound to be created (sad, happy, angry, and so on). The platform then selects and assembles the most suitable option from sounds it has already been trained on. If you don't like the instruments in the result, you can add and remove instruments over the same sound, slow it down or speed it up, and experiment until you reach the result you want. I don't know whether the result will make you as happy as music composed by a real person. The advantage, however, is that the generated “song” can be changed on the fly, or rather, created on the fly while being “personalized” at the same time.
According to Adorno, art is based on the individual’s own subjective creativity.
At this point, the most important question is: “Can artificial intelligence take the job of a musician?” My personal opinion is that this will never happen. Önder Kulak's doctoral dissertation, “Theodor Adorno: Culture in the Claws of the Culture Industry”, contains the following statement: “Adorno points out that the relationship between the instrument and the artist is extremely important for the value of a work. For example, for a work in which the instruments have begun to direct the artists, one cannot speak of artistic value, nor can more be expected of it than kitsch.” Art, after all, is based on “subjective creativity”.
This statement says exactly what I mean. Artistic value vanishes when instruments rule people, because art relies on the individual's own subjective creativity. For this reason, in my opinion, music produced with artificial intelligence is more likely to be a consumer product than something that touches the soul. What are these consumer products? The soundtrack of a social media video, or music that could be used in an advertisement. In short, AI is more likely to produce works that will not last long, works no one will hear and say, “Oh, what is this song, let me listen to it right away”. But here “popular culture” comes into play: recently popular music genres are generally well suited to computer generation, especially instrumental, nonverbal content. So Adorno and I could both be wrong. Indeed, time will tell.
Finally, I would like to say that artificial intelligence will be beneficial as long as it works alongside humans rather than against them. This applies to the music industry as well. In other words, musicians and artists will find it easier to benefit from technology than to reject it. Perhaps, thanks to the combination of the human brain and artificial intelligence, we will one day be listening to songs we cannot yet imagine, in formats we never expected. Who knows?