Visualizing Twitter Feeds
Technologies: Python, PyTorch, scikit-learn
My team's final project in EE 460J: Data Science Lab aimed to analyze the effects of emotion on AI-generated images.
The basic pipeline was as follows: given the user's input text (we tested on Twitter data, but any written content would work), we extracted the tweet's keywords and used VQGAN + CLIP to create a starter image from them.
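The keyword extraction model itself isn't reproduced here, but a minimal sketch of the keyword-to-prompt step, using a plain TF-IDF ranking from scikit-learn as a stand-in (the function names, reference corpus, and top_k value are illustrative, not the project's actual code), might look like this:

```python
# Sketch: turn a tweet into a short text prompt for VQGAN + CLIP.
# TF-IDF keyword ranking is only a stand-in for the project's keyword model.
from sklearn.feature_extraction.text import TfidfVectorizer


def extract_keywords(tweet, corpus, top_k=3):
    """Rank the tweet's words by TF-IDF against a small reference corpus."""
    vectorizer = TfidfVectorizer(stop_words="english")
    vectorizer.fit(corpus + [tweet])
    scores = vectorizer.transform([tweet]).toarray()[0]
    terms = vectorizer.get_feature_names_out()
    ranked = sorted(zip(terms, scores), key=lambda pair: pair[1], reverse=True)
    return [term for term, score in ranked[:top_k] if score > 0]


def build_prompt(tweet, corpus):
    """Join the top keywords into the text prompt that seeds the starter image."""
    return " ".join(extract_keywords(tweet, corpus))


# A dog-related tweet yields a dog-centric prompt for the starter image.
prompt = build_prompt(
    "Just took my dog to the park, best afternoon ever!",
    corpus=["we love data science", "training models all day"],
)
```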
We then used CLIP-guided diffusion to edit the image so that it reflected the tweet's sentiment. For example, if a tweet was about a dog, the starter image would be of a dog; after using CLIP-guided diffusion to inject an emotion such as happiness, we would expect the final image to be of a happy dog. For the project we created a keyword extraction model, a sentiment analysis model, and tuned state-of-the-art generative image models. This was a really interesting project, and you can read more about it in the Medium article linked below.
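To give a sense of how the CLIP guidance works, here is a minimal sketch (assuming OpenAI's clip package; the model choice and file name are illustrative, and the project's actual guidance loop is not shown) that scores an image against an emotion-augmented prompt such as "a happy dog":

```python
# Sketch: CLIP similarity between an image and an emotion-augmented prompt.
# CLIP-guided diffusion turns a score like this into a loss that nudges each
# denoising step toward the target emotion; only the scoring is shown here.
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # model choice is illustrative


def clip_similarity(image_path, prompt):
    """Cosine similarity between an image and a text prompt under CLIP."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    text = clip.tokenize([prompt]).to(device)
    with torch.no_grad():
        image_features = model.encode_image(image)
        text_features = model.encode_text(text)
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)
        return (image_features @ text_features.T).item()


# "starter_dog.png" is a hypothetical starter image; guidance raises this
# similarity over the diffusion steps so the dog ends up looking happy.
score = clip_similarity("starter_dog.png", "a happy dog")
```

During sampling, a loss built from this similarity is what steers the image toward the injected emotion.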