Sound as input
This is something you mentioned earlier: extending human actions by simulating the echo of those actions from one place to another. It would be limited to actions that involve sound, e.g. talking, snapping your fingers, etc. This could be very nice if it has a very short release time.
Verbal -> non-verbal communication. Speech is transformed into abstract shapes / vibrations, a non-verbal form of communication. Like the two cups on a string, sound is converted into vibration. The aim is to present an expanded way of communicating, aided by visuals: the pitch and tone of your voice is analysed, each sound is presented in its wave form, and each spline connects to the one before it, creating a generative landscape through sound.
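A rough sketch of how the analysis could work (assuming the speech is already a mono numpy array at a known sample rate; the frame sizes and the way slices are stacked into a landscape are placeholder choices, not a fixed design):

```python
import numpy as np

def frame_features(samples, sr, frame_len=2048, hop=1024):
    """Estimate a rough pitch (FFT peak) and loudness for each frame of speech."""
    features = []
    for start in range(0, len(samples) - frame_len, hop):
        frame = samples[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
        pitch = freqs[np.argmax(spectrum[1:]) + 1]        # skip the DC bin
        loudness = float(np.sqrt(np.mean(frame ** 2)))
        features.append((pitch, loudness))
    return features

def build_landscape(samples, sr, frame_len=2048, hop=1024, points_per_slice=64):
    """One slice per frame: the frame's own waveform, resampled to a fixed
    point count and pushed one step back in z, so consecutive slices
    connect into a generative terrain."""
    slices = []
    for z, start in enumerate(range(0, len(samples) - frame_len, hop)):
        frame = samples[start:start + frame_len]
        idx = np.linspace(0, frame_len - 1, points_per_slice).astype(int)
        x = np.linspace(0.0, 1.0, points_per_slice)
        y = frame[idx]                                     # the sound "in its wave form"
        slices.append(np.column_stack([x, y, np.full(points_per_slice, float(z))]))
    return slices
```

The pitch / loudness values from `frame_features` could then drive colour or scale per slice, so the voice's tone shapes the landscape as well as its outline.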
Buoy / wave height data can basically be visualised as a little slice / plane with a displacement modifier on it. It could be fun to take this data and apply it to an "urban slice" instead: either directly, to create a surreal feeling, or by mapping the height data to something else, such as time. The purpose would be to create a connection between two wildly different settings; a busy urban setting ≠ the ocean.
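A quick sketch of both mappings (assuming the buoy readings arrive as a plain list of heights in metres; the grid size, scaling and the time range are illustrative placeholders):

```python
import numpy as np

def displace_slice(wave_heights, rows=40, cols=40, scale=1.0):
    """Build a flat grid (the "urban slice") and push each row up by the
    wave height recorded at that step, like a displacement modifier."""
    h = np.asarray(wave_heights, dtype=float)
    row_heights = np.interp(np.linspace(0, len(h) - 1, rows), np.arange(len(h)), h)
    xs, ys = np.meshgrid(np.linspace(0, 1, cols), np.linspace(0, 1, rows))
    zs = np.repeat(row_heights[:, None], cols, axis=1) * scale
    return np.stack([xs, ys, zs], axis=-1)        # (rows, cols, 3) vertex grid

def height_to_time(wave_heights, min_s=0.1, max_s=2.0):
    """The alternative mapping: normalise each height into a duration, so
    calm water gives short steps and big swell gives long ones."""
    h = np.asarray(wave_heights, dtype=float)
    span = h.max() - h.min()
    norm = (h - h.min()) / span if span else np.zeros_like(h)
    return min_s + norm * (max_s - min_s)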
Alternatively, go to a location and record audio, then extract the 3D data of that location from a 3D map and use the sound to displace and distort the digital copy. Sound is vibration travelling through the air, with tiny refractions; this project increases the impact / magnitude of those vibrations, presenting a more extreme version of what already happens (compare Schlieren photography).
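A minimal sketch of the distortion step, assuming the map extract is already a vertex array with per-vertex normals and the field recording is a mono numpy array; using vertex height to pick the sample is just one possible choice:

```python
import numpy as np

def displace_by_sound(verts, normals, audio, strength=0.5):
    """Push every vertex along its normal by the recorded amplitude,
    exaggerating the tiny pressure variations the sound actually caused."""
    envelope = np.abs(audio) / (np.abs(audio).max() or 1.0)
    z = verts[:, 2]
    t = (z - z.min()) / ((z.max() - z.min()) or 1.0)   # 0 at ground level, 1 at the top
    idx = (t * (len(envelope) - 1)).astype(int)        # let the recording sweep up the model
    return verts + normals * (strength * envelope[idx])[:, None]
```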
Sound as output
Create sound from motion. This would be about using real-time data via a live stream; e.g. there are tons of traffic cams around Seoul that could be used, although it would be better to have a stream that involves more people and fewer cars. For this it would be about capturing the urban soundscape. However, this would be very similar to the wind project by Damian Stewart. To differentiate the project, maybe I should try to analyse the input differently, e.g. use the motion itself to create sounds instead of pixel contrast.
Summed up: Use motion to draw paths, then convert these paths into waveforms / notes.
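A rough sketch of that pipeline, assuming frames come in as greyscale numpy arrays grabbed from the stream; the motion threshold and the pentatonic mapping are placeholder assumptions:

```python
import numpy as np

def motion_path(frames, threshold=25):
    """Frame-difference each pair of frames and record the centroid of the
    changed pixels; over time this traces the 'path' drawn by motion."""
    path = []
    for prev, curr in zip(frames, frames[1:]):
        moved = np.abs(curr.astype(int) - prev.astype(int)) > threshold
        ys, xs = np.nonzero(moved)
        if len(xs):
            path.append((xs.mean() / curr.shape[1], ys.mean() / curr.shape[0]))
    return path                                   # list of (x, y) in 0..1

def path_to_waveform(path, sr=44100, note_s=0.15, scale=(0, 2, 4, 7, 9)):
    """Map each point's height to a pentatonic note and concatenate the
    resulting sine bursts into one waveform."""
    chunks = []
    for _, y in path:
        degree = scale[int((1.0 - y) * (len(scale) - 1))]
        freq = 220.0 * 2 ** (degree / 12.0)
        t = np.arange(int(sr * note_s)) / sr
        chunks.append(0.3 * np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks) if chunks else np.zeros(0)
```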
Water - LED experiment. This one has the same problem as the media art project about creating waves. To get around this, maybe I could make a small stand for the LED light, place it somewhere along the Cheonggyecheon and record the data myself, then use that data for the experiment. However, it wouldn't be real-time data anymore. The project would then be about capturing a specific moment for people to experience without being there, similar to how photographs give us a glimpse into the past: experiencing the past via a more immersive audio-visual installation.
Alternatively, I could try to use buoy / wave height data from a distant location to generate sound, but I think this has been done many times before.
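If I did try it, a minimal version could map the readings to a continuous drone rather than discrete notes (assuming a plain list of heights; the frequency range and step length are arbitrary choices):

```python
import numpy as np

def heights_to_drone(wave_heights, sr=44100, step_s=0.5, f_lo=80.0, f_hi=400.0):
    """Interpolate the readings into a smooth curve, map it to a frequency
    sweep, and integrate the phase so the drone glides without clicks."""
    h = np.asarray(wave_heights, dtype=float)
    span = h.max() - h.min()
    norm = (h - h.min()) / span if span else np.zeros_like(h)
    n = int(sr * step_s * len(h))
    curve = np.interp(np.linspace(0, len(h) - 1, n), np.arange(len(h)), norm)
    freqs = f_lo + curve * (f_hi - f_lo)
    phase = 2 * np.pi * np.cumsum(freqs) / sr
    return 0.3 * np.sin(phase)
```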
I was thinking about how to create forms related to heat. One way would be to go in the direction of a mirage / the refraction of light; another would be the human thermal boundary layer.
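A tiny sketch of the mirage direction, assuming a greyscale numpy image as the source; the sine-based ripple field is only a stand-in for real refraction / Schlieren behaviour:

```python
import numpy as np

def heat_haze(img, t=0.0, strength=3.0):
    """Resample the image through a wavy offset field that grows stronger
    towards the bottom, imitating light bending through rising hot air."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    heat = (ys / h) ** 2                          # more distortion near the ground
    dx = strength * heat * np.sin(ys * 0.15 + t * 3.0)
    dy = strength * heat * np.cos(xs * 0.12 + t * 2.0)
    xi = np.clip((xs + dx).astype(int), 0, w - 1)
    yi = np.clip((ys + dy).astype(int), 0, h - 1)
    return img[yi, xi]
```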
The problem with this model is that it may be too flimsy with just paper, and if I add support inside, it will interfere with the rear projection.