This week I have been totally blind-sided by AI. Although I can see many uses (and misuses) of AI, until recently I hadn’t found a day-to-day use for it myself. Thinking more broadly about the tasks I do repetitively, however, it hit me square in the face how I could deploy AI (at least in the form of ChatGPT) into my daily work. This got me thinking about the “Technology Adoption Curve”, which I now believe we are moving along faster than ever.
To put things in perspective, consider the rate of change in the music industry since the release of Napster in 1999. Napster disrupted the industry by making it easy for people to share music without paying. It ended up mired in copyright infringement lawsuits and was shut down in 2001, but its impact on the music industry was profound.
After Napster came the rise of digital music services like iTunes, Spotify, and Apple Music, which allowed people to legally purchase and stream music online. These services also provided valuable data about user preferences, helping artists and record labels better target their audiences. Think about how you consume music now, then think about how you did it even as late as 2008. The difference in well under two decades is staggering.
Today, AI is starting to have a similar impact on a range of industries. As I mentioned, I hadn’t seen this before, but now I can see just how vast this thing “AI” is. In healthcare, AI is used to analyze medical images and diagnose diseases. In finance, it is detecting fraud and informing investment decisions. In transportation, it is being used to improve safety and efficiency. And these are not fast-moving, seat-of-the-pants, disruptor industries.
Although we are past the “early adopters” phase, we are still in the early stages of the AI adoption curve. While there are many promising applications for AI, there are vast challenges to address before we reach wider adoption. The biggest I see (as a data-conscious citizen) is the lack of transparency and interpretability. It can be impossible to understand how an AI system arrived at a particular decision, which undermines trust and accountability. This must be addressed before we can move on, but I worry that the technology doesn’t allow for it by its very nature. Perhaps we shouldn’t be able to understand these things; if we could, they would not fascinate us.
Another data-related challenge is the need for large amounts of it to train the algorithms. This can be a problem in industries where data is scarce or sensitive, meaning certain parts of society could get left behind. Finally, there is the issue of bias in AI systems, again risking the marginalisation of those without the power to wield AI.
There’s so much for us to learn about prompts (prompt engineering is a new discipline in its own right, by the way), ethics, law, policy, and more - so exciting, and in equal measure utterly, consumingly scary.
To end on a lighter note, I’ve also played with DALL-E and produced some amazing images: