Chasm of AI Security Between Research and Products

AI research continues to amaze us, but are its results safe to use in products and services?

Concerns about AI aligning with human goals have become real.

Photo by Andy Kelly on Unsplash

AI research produces astonishing results every week (if not daily), and it is solving important, hard problems. So it is natural to build products around the new techniques and tools that emerge. But if a technique is proven to work, is it ready to be put into production?

And what about the doomsday predictions that AI will enslave humanity one day? Should you really worry about them?

In this issue, I discuss these two questions and how they connect through AI Alignment and AI Safety research.

Let’s start with examples.

GitHub Copilot: AI Code Generators

GitHub Copilot can significantly enhance developers’ productivity. GitHub now charges $10/month for it, but does it pay the authors and companies who wrote and own the code it was trained on?

What about the copyright of the code it generates? Quite likely, it is a copy of some code in the training data. What if that code was released under a license like the GPL? Will you risk being forced to release your own code under the GPL?

These questions are beginning to put GitHub Copilot under necessary legal scrutiny, and developers should be aware that the code Copilot generates may carry licensing obligations.

OpenAI GPT-3: Large Language Models

GPT-3 is being used in all sorts of copywriting. There are apps that generate job descriptions, or even cover letters targeting a specific job.

As a hiring manager, you would love to use GPT-3 for writing job postings. But how will you feel when you learn that many of the resumes and cover letters you receive were generated using GPT-3 too?

How about your competitor flooding you with fake but very realistic and impressive resumes? You end up spending more time filtering through them than you saved in writing the job description.

How about automatically generating thousands or even millions of invoices and payment receipts for tax evasion and money laundering?

And how do we develop tools to guard against bad uses of this excellent and powerful technology?

Challenges Generative AI May Pose

Advances in GANs and other generative models have suddenly raised many serious questions. Things will get very challenging when fakes become almost real, or even more “real” than reality.

What if data vendors start selling you model-generated data as real data? The biases of the model that generated the data will then seep into your model as well.

Being able to generate a super-realistic image of a person who does not exist is certainly a marvel. Generating realistic videos could be the next thing. And where will that lead our polarized social media world?

These technologies give developers a lot of power to create cool apps. Or is it too much power for our own good? Will fakes carry a different statistical signature than real data, and will it be identifiable? There are more questions than answers.
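
One proposed check builds on perplexity: text sampled from a language model tends to look more predictable to that model than human-written text does. Below is a minimal sketch of that idea using GPT-2 via Hugging Face; the model choice and the threshold are illustrative assumptions, not tested values.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Score text with GPT-2; generated text often has lower perplexity
# (i.e., it is more predictable) than human-written text.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return the average
        # cross-entropy loss over the sequence.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Made-up threshold, purely for illustration; a real detector would
# calibrate it on labeled human vs. generated samples.
SUSPICION_THRESHOLD = 20.0
text = "The quick brown fox jumps over the lazy dog."
print(perplexity(text) < SUSPICION_THRESHOLD)  # True => suspiciously predictable
```

Single-statistic checks like this are easy to evade, which is exactly why detection remains an open problem.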

AI Doomsday

Elon Musk is probably the most famous person warning that AI is the biggest existential threat to humankind.

Most of us thought this was just an opinionated debate. But late last year, Alex Turner et al. published a paper titled “Optimal Policies Tend to Seek Power” at NeurIPS 2021, one of the top AI conferences. For the first time, the paper provided mathematical underpinnings for these worries: proofs that, under certain conditions, optimal reinforcement-learning policies tend to seek power.
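
Roughly, and paraphrasing the paper from memory (check the original for the exact definition), it formalizes the “power” of a state $s$ as the agent’s average optimal value over a distribution $\mathcal{D}$ of reward functions:

$$\mathrm{POWER}_{\mathcal{D}}(s, \gamma) = \frac{1-\gamma}{\gamma}\,\mathbb{E}_{R \sim \mathcal{D}}\!\left[V^{*}_{R}(s, \gamma) - R(s)\right]$$

States that keep more options open score higher, and the theorems show that, for most reward functions, optimal policies tend to steer toward such states.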

This has reshaped the debate about whether AI agents’ interests will stay aligned with humanity’s in the long term. It is no longer just a “for vs. against” matter of opinion. AI safety has suddenly gotten very real.

AI Alignment

AI Alignment research aims to ensure that AI systems stick to their designers’ intended goals and to human values. A simple example: large language models like GPT-3 should not generate racist text.

While models like Stable Diffusion make a big splash, developers need to consider how these models will affect their applications, and how to build features and safeguards against their abuse. Hiring sites may have to start building “reality check models” to judge whether a job posting or resume is real or generated by GPT-3.
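
Such a reality check could start from an off-the-shelf detector. The sketch below uses roberta-base-openai-detector, a classifier OpenAI released for spotting GPT-2 output; whether its scores transfer to GPT-3-generated resumes is an assumption that would need validation.

```python
from transformers import pipeline

# Pretrained detector of machine-generated text (trained on GPT-2
# output; applying it to GPT-3 text is an untested assumption).
detector = pipeline("text-classification",
                    model="roberta-base-openai-detector")

resume_text = "Seasoned engineer with a decade of experience in ..."
result = detector(resume_text)[0]

# The pipeline returns a label (human-written vs. generated) and a
# confidence score. A real hiring site would route low-confidence
# cases to manual review rather than auto-rejecting them.
print(f"{result['label']}: {result['score']:.2f}")
```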

This tug-of-war is getting interesting.

Written by Satish Chandra Gupta, Data/ML Practitioner