It's the final day of VR Days – and I found the session block on AI & Virtual Beings fascinating.
If you’ve heard my recent round of talks you’ll know I wasn’t a massive fan of how Google Duplex seemed to purposely add human speech inflections like “uh-huh” to trick the person on the other end of the line into thinking they were talking to a person. Today it was interesting to see all the panellists agree that disclosure is very important in these situations.
I was particularly interested in Cameron Wilson’s segment on his virtual modelling agency – in particular the story of Shudu, the world’s first virtual supermodel. He walked through creating the character, casting her for a shoot, and then finally revealing that she isn’t real – and the public feedback on that reveal. There are many virtual humans who have attained fame, followings and influencer status. There was discussion over whether the “reality” should always be revealed, whether the person or people behind these accounts should be made known, etc. In an age of social media where nothing is as it seems – the photos are staged, the outfits carefully curated, and the filters turned up to make it look like a picture-perfect reality – is that much different to a virtual influencer?
Virtual assistants, and people becoming emotionally invested in characters and stories, were also food for thought. It made me think of the movie Her, where the main character falls in love with his virtual assistant. While to some that may seem unrealistic, I am brought back to my days of working in a game store when Lara Croft and Tomb Raider were selling like hotcakes and many customers professed to being in love with her.
It also made me think of Robot & Frank, in which a companion robot keeps an elderly cat burglar with dementia company – ensuring he eats correctly, gets the stimulation he needs and so on, while also being impervious to his moods. With a growing ageing population and ever-increasing loneliness, I can’t help but think of the good this technology can do. If these assistant robots can motivate someone to get up, get moving, and feel better about themselves, and help fill the void, that could outweigh the downside that they can’t emotionally invest back in that person.
It’s the 2nd day of the VR Days conference and we’ve changed venue to this funky warehouse location. I spent the morning listening to sessions on VR training but it’s the set of afternoon sessions on “Use Your Brain 4.0” that got me thinking.
I found the session about BrainVu interesting. It “uses a non-invasive and remote smartphone/AR/VR camera to extract physiological bio-markers that indicate changes in brain activities and deduce human mental responses including: processing Cognitive Load, Stress Level and Emotional Engagement. The calculated human states are then correlated to events and used to connect to a multitude of platforms to provide human machine emotional interaction.”
Before we got into that, the idea of subliminal messaging (messages below the threshold of normal perception) was discussed. In many countries this is banned. I went and double-checked, and YES, Australia is on that list:
“preventing the broadcasting of programs that:
(i) simulate news or events in a way that misleads or alarms the audience; or
(ii) depict the actual process of putting a person into a hypnotic state; or
(iii) are designed to induce a hypnotic state in the audience; or
(iv) use or involve the process known as subliminal perception or any other technique that attempts to convey information to the audience by broadcasting messages below or near the threshold of normal awareness;”
There were some interesting points in these sessions around the use of colour and light to enhance memory retention, but also the use of cameras to extract biomarkers and infer what the user is thinking. As someone with the absolute worst poker face, the idea that my eyes could additionally give away my thoughts set off some alarm bells. I started to think of the scene in Blade Runner where they look into a person’s eyes while asking questions to decide whether they are human or a replicant.
The discussions on eye tracking, and what you can tell about a person from it, hit home a bit more when I wandered into the Church of VR, where large groups of people were trying various experiences. I know lots of my data gets tracked every day. A lot of it I knowingly “give” by putting it on the internet, using my credit card, etc. – but should my true thoughts and feelings be mined, things I can’t turn off or consciously control? Even though I bought a $100 pair of shoes to wear to work, should I have to give up the fact I was secretly dying to buy the $300 pair (or not so secretly, if this eye tracking thing works)? There’s something about my innermost thoughts that I feel should stay just that… innermost.
Continuing on Day 1 of the Vision and Impact Conference of VR Days Europe and I’m listening to Brandon Harper talk about ambient experience.
In his session he mentions a game the team plays called Judgment Call to look at the ethical implications of the products they’re creating. I hadn’t heard of it before but a quick internet search allowed me to find it here: https://vsdesign.org/publications/pdf/p421-ballard.pdf
I really liked the concept – take a stakeholder, a principle, and a rating number, then write a review and discuss it. It’s a simple concept and I’m keen to try it on my next project. It fits in well with a few books I’ve been reading lately on decision making and strategies. I like the way it makes you think about what you would have to do (or what you are already doing) that would make that stakeholder give you that rating for a particular theme. I think it could be really useful for surfacing risks you hadn’t thought through before. Take the example of a 5-star rating from a hacker: what are we doing that would let a hacker give us 5 stars (meaning it’s easy for them to exploit the system), and how do we balance that against reviews from a standard user who wants ease of access while staying secure?
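The card-dealing mechanic is simple enough to sketch in a few lines. Here’s a minimal Python version I could imagine using to generate prompts for a team session – note the stakeholder and principle lists are my own illustrative placeholders, not the official decks from the Judgment Call paper:

```python
import random

# Placeholder decks -- the real game ships its own stakeholder and
# ethical-principle cards; these names are illustrative only.
STAKEHOLDERS = ["end user", "hacker", "regulator", "bystander", "support engineer"]
PRINCIPLES = ["privacy", "security", "accessibility", "transparency", "fairness"]
RATINGS = [1, 2, 3, 4, 5]

def deal_hand(rng=random):
    """Deal one (stakeholder, principle, star-rating) prompt.

    Each player then writes a short product review from that
    stakeholder's perspective, at that star rating, focused on that
    principle -- e.g. a 5-star review from a hacker praising how easy
    the system was to exploit.
    """
    return (rng.choice(STAKEHOLDERS),
            rng.choice(PRINCIPLES),
            rng.choice(RATINGS))

if __name__ == "__main__":
    rng = random.Random(42)  # seeded so a workshop run can be reproduced
    for _ in range(3):
        stakeholder, principle, stars = deal_hand(rng)
        print(f"Write a {stars}-star review from the {stakeholder} about {principle}.")
```

The interesting work all happens in the discussion afterwards, of course – the code just removes the temptation to pick comfortable combinations.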
Keen to hear if anyone has used something similar when developing their product features.
Back in March last year, my friend James challenged me to the AI Ethics challenge. While I intended to write a blog post and had lots of ideas I decided I’d fulfil the challenge with a talk instead.
50 Shades of Grey–Ethics In A World of Ever-Growing AI
It is the best of times, and the worst of times. We are in an age where technologists can use their skills to make amazing, positive impacts on people’s lives. But these seemingly boundless possibilities also put us on a slippery slope, where the same technology can be used to harm.
In this session Bronwen will cover areas of technology that can be used to help or hinder, and question the increasing use of AI to bring these abilities into the hands of everyday people. She does not claim to hold the right answers to these questions; instead she wants to foster open discussion and questions about our responsibility as ethical technologists to help drive the use of technology for good.
First cab off the rank for the Brisbane Azure User Group on the 11th July.
Followed by the Brisbane .Net User Group on the 16th July.
We had some really great discussions around the trolley problem, voice synths etc.
A few people asked about some of the sites I’d linked to, so here they are.
The book I’ve been reading is The Age of Surveillance Capitalism.
If you’re after an overview of all the different types of ethics (we only discussed three of them briefly), this site is really good: https://plato.stanford.edu
If you’re interested in having input or checking out where Australia is at with Ethics: Australia’s AI Ethics Framework
If you want to know about genetics discrimination in Australian insurance: Australia: Genetics Discrimination in Insurance