How can we use Stable Diffusion to help generate visualization images?
With the recent breakthrough of generative models pre-trained on large datasets, models such as DALL·E and Stable Diffusion can effortlessly synthesize high-resolution images conditioned on an input text prompt. It is therefore interesting to investigate whether we can integrate such tools into the process of designing visualizations. Furthermore, we can take this opportunity to focus on how we, as users, interact with this piece of technology through prompt engineering and hyperparameter tweaking.
For our activity, we will learn to use Stable Diffusion to generate visual components useful for designing visualization prototypes.
We will follow two approaches: 1) text-to-image, where we input a text prompt describing the desired image, and 2) image-to-image, where we input a simple sketch along with a text prompt describing how that initial sketch should evolve.
As we experiment with these approaches (sketched in code below), record and reflect on your prompts and ideas for later discussion.
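For orientation, here is a minimal sketch of both approaches using the Hugging Face diffusers library. The specific checkpoint, prompts, file names, and the CUDA GPU are assumptions for illustration, not part of the activity setup; any Stable Diffusion checkpoint and your own prompts will do.

```python
# Minimal sketch: text-to-image and image-to-image with Stable Diffusion.
# Assumptions (not from the workshop description): a CUDA-capable GPU, the
# "runwayml/stable-diffusion-v1-5" checkpoint, placeholder prompts and filenames.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

MODEL_ID = "runwayml/stable-diffusion-v1-5"  # example checkpoint

# 1) Text-to-image: describe the desired visual component in a prompt.
txt2img = StableDiffusionPipeline.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16
).to("cuda")
image = txt2img(
    prompt="flat isometric bar chart icon, clean lines, white background",
    num_inference_steps=30,   # more steps: slower, usually cleaner output
    guidance_scale=7.5,       # how strongly the image follows the prompt
    generator=torch.Generator("cuda").manual_seed(42),  # fixed seed for reproducibility
).images[0]
image.save("txt2img_prototype.png")

# 2) Image-to-image: start from a rough sketch and describe how it should evolve.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16
).to("cuda")
sketch = Image.open("my_sketch.png").convert("RGB").resize((512, 512))  # hypothetical input sketch
refined = img2img(
    prompt="polished infographic layout based on this rough sketch, pastel palette",
    image=sketch,
    strength=0.6,             # 0 keeps the sketch as-is, 1 ignores it entirely
    guidance_scale=7.5,
).images[0]
refined.save("img2img_prototype.png")
```

During the activity, guidance_scale, num_inference_steps, strength, and the random seed are the main knobs worth tweaking alongside the prompt itself.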
Schedule
We will loosely follow this plan but remain open to drop-ins as time permits.
| Time | Activity |
| --- | --- |
| 16.00 | Arrival and snacks |
| 16.05 | Light introduction to Stable Diffusion and setup |
| 16.20 | Introduction to the activity |
| 16.30 | Experiment with text-to-image to create visualization prototypes |
| 17.00 | Check in and informal chat |
| 17.15 | Draw sketches to be used and experiment with the image-to-image approach |
| 17.45 | Reflections, wrap-up, and informal chat |
| 18.00 | Informal chats and snacking |
Expectations
In keeping with the jam narrative, we aim for this event to be fun and engaging, and an opportunity to meet others who share an interest in human data interaction.
- Have fun
- Bring your own laptop
- Discuss perspectives related to human data interaction and beyond