The integration of artificial intelligence into our daily lives is accelerating – but AI mustn’t outpace our concern for privacy and cyber security.

The recent situation involving Microsoft’s new AI feature serves as a stark reminder of this necessity – and how businesses can easily score an own goal!


Privacy by design – missed!

Microsoft announced a new AI feature designed to enhance user productivity by suggesting actions based on on-screen content.

The feature would take regular screenshots and store them locally (irrespective of what the user was doing), allowing the AI to ‘recall’ what had been worked on and advise the user.

The implementation raised significant – and understandable – privacy concerns.

This feature, intended to streamline operations, inadvertently became a privacy nightmare.

This case highlights what can go wrong when a business (of any size) forgets to build privacy and security into a new product from the outset.

In their eagerness to deploy AI, Microsoft scored an own goal by overlooking a fundamental aspect of digital trust – user consent.

Once the horse had bolted…

The backlash was swift and loud, leading Microsoft to modify the service to an opt-in feature.

This change came after the fact and suggests a reactive rather than proactive approach to privacy.

Companies need to embed privacy considerations and security at the development phase, not as an afterthought.

By subsequently making the feature opt-in, Microsoft acknowledged the oversight, but the damage, in terms of user trust and perception, might already have been done.

Not only that, but making the feature opt-in still doesn’t address all consent issues – for example, what if a screenshot captures a video call with another person, who hasn’t consented to that snap being taken?

Regulatory Scrutiny and Reputation Risk

Further complicating matters for Microsoft is the investigation launched by the UK’s Information Commissioner’s Office (ICO).

This scrutiny is not just a regulatory hurdle – it is a significant dent in the company’s image.

Being investigated by a privacy watchdog is a bad look for any company, particularly a tech giant that serves millions of global users.

The investigation could lead to further backlash or potential fines, but it has already tarnished Microsoft’s reputation with users.

Have they become the tech equivalent of Steve McClaren, the ‘wally with the brolly’?

Don’t be blinded by AI!

The allure of AI is undeniable.

However, this incident with Microsoft serves as a crucial reminder that AI development must go hand in hand with robust privacy practices and cyber security measures.

Companies venturing into AI must remember that with great power comes great responsibility.

Privacy, security, and compliance are fundamental to maintaining user trust and safeguarding the company’s reputation.

Start right – in the early design stages, carry out privacy assessments, ensure transparency about data usage, and give users control over their data.

These steps aren’t optional – they’re essential.

If Microsoft can get it wrong, so can you.

Before you embark on an AI-based project, stop and consider privacy and security.

Fail to do that and you’ll face a similar backlash and be forced to rethink – but perhaps without Microsoft’s deep pockets to pay for the changes needed.

Our Solution

We created our AI-focused Cybercy Check to provide businesses with an understanding of their readiness for an AI project – and it works!

If you’d like a complimentary Cybercy Check for your business, drop me a comment, send me a message or fill in your details here – before you kick off!

In the meantime – think before you jump into an AI project!