Sense, Process, Connect, Affect
What constitutes an Internet Connected Thing? Aside from the technology aspects, what are the fundamental concepts that an IoT object needs to address? This thought process led me to develop the ‘Sense, Process, Connect, Affect’ grid, which tries to capture the technical and meaning aspects and, importantly, the story aspects of IoT objects:
Why a Tuning Fork?
To test the validity of the grid, I needed an exemplar project. It needed to be relatively simple (in theory) but cover all the aspects in the grid (with the story aspect being this developmental journey). After considering a few options, and talking to various makers, craftspeople and artists, one particular project seemed viable. A musician, who also taught music online during the pandemic, asked if it was possible to create a device to check whether the students' instruments were in tune, or more importantly in tune with each other. This provided a technical ‘sense’ and ‘process’ aspect (recognising a note from a sound input), and a ‘connect’ and ‘affect’ aspect (students would try to be in tune with each other). Using the metaphor of the tuning fork provided a familiar interaction for users.
It was also an opportunity to test the idea of phases of work: design (planning and constraining, pre-situ), craft (trying and experimenting, in situ) and engineering (testing and measuring, post-situ). The idea is that craft projects spend more time in the craft phase (working with the ‘materials’, trying things, experimenting), but do not entirely omit the design phase (establishing a vision rather than a plan) or the engineering phase (experiments need results, objects need testing for suitability, and so on). In particular, a craft-focused project has multiple valid outcomes (happy accidents), whereas a design-dominant project would aim for a single goal, and an engineering-dominant project would strive for results within a certain tolerance.
Vision (v1.0) ‘Designing’
- A Tuning Fork device that recognises a note.
- This note is compared with other users' notes.
- A result of ‘in tune’ or not is reported, with some indication of how to remedy it (e.g. user one is too high).
The technical details of the development processes are outlined in the instructions tab above. Here, I would like to talk about how working with the materials, in this case electronic components, code, wood and metal, allowed for experimentation and ultimately a change in the vision for the project as a whole.
Firstly, assembling the electronics was relatively simple, as the Tuning Fork only utilised one sensor (a microphone) and a processor (an Arduino). The interaction involved little more than monitoring a single input and recording that number over a period of time to produce a waveform. Some experimentation was required, to sample at the correct rate for example, but this was largely already documented by previous developers.
The housing for the Tuning Fork was developed using wood (for the handle) with a metal prong. This allowed me to dust off some old making skills I hadn't used in a while, and I think the housing will probably get a few more iterations on contact with users as it's a bit rough and ready.
The first challenge came when trying to identify a note. Several previous developers had published code on this, but the code is not straightforward. As an experiment, I decided to pit these algorithms against each other. The process was to record a few seconds of sound from a known source, put it through each algorithm and see which one was most accurate. This needed a bit of engineering.
Identifying a Note ‘Engineering’
- Generating a known note. To do this, I used a frequency generator app on my phone.
- Recording a sample. This was already part of the code, but now I would be using the same sample across three candidate algorithms, so it was stored in an array.
- Testing each algorithm. I then let each of the algorithms loose on the recorded sound, and they output a calculated note/frequency.
The results were disappointing. None of the algorithms was reliable. On further reading, it turns out that this is actually a hard problem, one that requires more complexity than an Arduino can process in real time – so do we need to process the sound in the cloud using more powerful hardware? If this were a design-led (or engineering-led) process, that might be the logical next step. However, as this is a craft-led project, my first instinct is to re-examine the vision of the project.
Rethinking the Vision
Firstly, what do I want to keep from the process? I think the Tuning Fork metaphor is fundamental, and the concept of connecting to others through sound is the underpinning story. In that case, can we rethink connection? Can it be broader than an online classroom and instead connect across the population, in the way a social media platform might? In other words, can the Tuning Fork detect who in the population you are in harmony with? Harmony, unlike being in tune, is not a matter of equality; it means that two sounds go together. It is therefore possible to be in harmony with a much broader group of people than it is to be in tune with them.
Vision (v2.0) ‘Designing’
- A Tuning Fork that records sound.
- The sound is used to ascertain who (people, nature etc.) you are in harmony with (i.e. a group selection).
- This is presented on a social media platform – a ‘harmony board’.
Rather than focusing on detecting harmonics (which would be as complex as detecting notes), I wanted to focus on the concept of being in harmony with others. To this end, I decided to simply ‘split’ the incoming sound into 8 harmony groups, and instead concentrate on the output – the ‘social media platform’ Harmony Board. This simply takes the calculated harmony group and displays an animation informing the user which cared-for people (i.e. family and friends chosen by the user) they are currently in harmony with, what proportion of the population (or of various populations, e.g. older women) they are in harmony with, and likewise for nature (songbirds, oceans etc.) or even celebrities.
Is evaluation an engineering phase? It can be, if the goals of the project are to have an output that is measurable. In this case, the goal was to demonstrate a proof of concept for the Sense, Process, Connect, Affect model outlined above, and in doing so to produce an IoT object that allows connecting to others through sound. For the former, I think this is a good example of what can be done in a few months, but there is always room for improvement in the model as more makers attempt to create their works during the project. As for the latter, I think the concept is sound (pardon the pun), but success will come from using it with real users. Hopefully, a Tuning Fork Version 2 will be developed during the rest of the project. I'm already thinking about how I might rethink the vision (are craft-led projects ever really finished?)