This has been a project of echoes and icons, the echoic and the iconic,
but mostly it's been about love.
It turns out Paddington's way of showing Nuckles is an effective utilisation of the way we process information. We can process what we hear in our echoic sensory memory and what we see in our iconic sensory memory, and if we focus on these they are moved into our verbal and visual working memory. When learning something new we experience a high cognitive load, and these dual channels make instructions easier to understand because they exploit our brain's ability to process the echoic and the iconic, the verbal and the visual, in parallel.
This app is designed to support novice cooks by reducing cognitive load: communicating through dual channels, displaying only one step at a time, and using movement sparingly to direct attention.
If this app is a cake, I started icing it before it was baked.
Actually, one of the things I love about baking is the simple ratios at work. A biscuit is basically a 3:2:1 ratio of flour, butter and sugar, whereas a cake, say, is 1:1:1:1 of flour, butter, sugar and egg. Usually a data visualisation piece of work is 4:1, 80% data wrangling and 20% visualisation. For Studio 1 I spent the first half on data wrangling, then the remaining half on visualisation. This studio I started on the visual design and storyboard and ended up down a rabbit hole of icon design, leaving little time for the data wrangling.
It was a useful and necessary detour. Studio 2's focus on self-direction and getting lost has given me a stronger sense of direction for my practice, focussing on data visualisation and generative representations. Realising I was lost, and returning to graphic and visualisation design principles, pointed the way forward.
During this learning process I was building schematic mental models for design. The app aims to help learner cooks build, and then hook into, their own mental models for cooking processes.
Out of this reflection and retrospection, I have a plan for Studio 3, if I take this further: a focus on data wrangling that will let the user paste a conventionally formatted recipe and record their voice to create a vertical video that can be shared on social media platforms, and a focus on visualisation, with basic machine learning to make progress through the recipe more and more convenient (little need to press 'play').