AI and Visuals

We trained a facial-recognition model with Wekinator and explored Max 8 and TouchDesigner to create interactive, dynamic visual feedback.

Reflecting on it, I really loved working with face tracking: identifying the biases, the different ways the face reacted, and the angles at which the camera captured the data I was trying to train the model with. It was fun to link sensor data and the trained model's output to a visual processor in Max 8 and see the data in different ways. It was a path filled with mistakes and wrong connections, which will hopefully lessen as I use the tools more. I took these concepts and practiced creating dynamic visuals by linking code, and I also started working with TouchDesigner and learning how to use it.
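For anyone curious about the plumbing, below is a minimal sketch of the kind of pipeline described above, not our exact patch: it assumes Python with the opencv-python and python-osc packages, detects a face with OpenCV's stock Haar cascade, and streams the bounding-box features to Wekinator's default OSC input (port 6448, address /wek/inputs).

```python
# Hypothetical sketch: webcam face features -> Wekinator over OSC.
# Assumes `pip install opencv-python python-osc` and Wekinator running
# with its defaults (listening on port 6448 for /wek/inputs messages,
# configured in its GUI to expect 4 inputs).
import cv2
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's default input port
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        fh, fw = gray.shape
        # Four input features: face position and size, normalized to 0..1.
        client.send_message(
            "/wek/inputs",
            [float(x) / fw, float(y) / fh, float(w) / fw, float(h) / fh],
        )
    cv2.imshow("face input", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

On the Max 8 side, Wekinator sends its trained outputs to port 12000 as /wek/outputs messages by default, so a [udpreceive 12000] object followed by [route /wek/outputs] is enough to drive visual parameters in a patch.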

I am grateful to have learned the basics of visual coding and of training a model on spatial feature vectors to build datasets. Since I used only myself for the training data, I could watch how the model's recognition changed for friends with different skin tones, and observe the biases tied to skin color in how it understood my face. The bias was, of course, very high: the training data was limited to about 30 pictures of me, so that was unavoidable.
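To give a sense of how small that training set was, here is a hypothetical sketch of scripting a recording session through Wekinator's documented OSC control messages (also sent to port 6448); a couple of seconds of webcam frames from one person is the entire dataset, which is exactly why the model generalized so poorly to other faces.

```python
# Hypothetical sketch of scripting a tiny recording session via
# Wekinator's OSC control messages, while the script above keeps
# streaming /wek/inputs features from the webcam.
import time
from pythonosc.udp_client import SimpleUDPClient

wek = SimpleUDPClient("127.0.0.1", 6448)

# Set the output value that incoming examples should map to
# (e.g. class 1 = "my face"), then record briefly.
wek.send_message("/wekinator/control/outputs", [1.0])
wek.send_message("/wekinator/control/startRecording", [])
time.sleep(2.0)  # ~30 frames at webcam rate -- the whole dataset
wek.send_message("/wekinator/control/stopRecording", [])

# Train on whatever was recorded and start running the model.
wek.send_message("/wekinator/control/train", [])
wek.send_message("/wekinator/control/startRunning", [])
```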

I will be using visual processing in my future projects, as it makes the data curated and created by the body, and any data that needs to be shown in a captivating way, impactful and easy to understand, especially when changes are expressed dynamically through movement and sound. I really loved learning these tools, and I will definitely take advantage of them in the AR and VR experiences I create as well.