
ChatGPT at Biola: Is AI Safe to Use?

May 15, 2023

Image: Humanoid robots sitting at a conference table having a meeting.

Article at a Glance

  • ChatGPT and other AI services are tools, and we need to use them responsibly.
  • You must never upload sensitive data to ChatGPT or other AI services.
  • You may upload public or non-sensitive data to ChatGPT to help you with your work.
  • Biola IT is planning to implement a locally-managed AI service that could be used to process sensitive data.
  • AI Chatbots can be confidently wrong.

Last week, Samsung made headlines after reports that its employees had accidentally leaked highly sensitive trade secrets to ChatGPT. Since the incident, Samsung has banned the use of ChatGPT and similar services on all employee devices.

Over the last several months, ChatGPT and other conversational artificial intelligence (AI) models have taken the world by storm. Entire companies have sprung up overnight offering AI services, and tech leaders around the world have called for a temporary halt to AI development. Italy banned ChatGPT entirely, and other countries are considering the same action.

In the month of April, Biola faculty, staff, and students visited ChatGPT 83,000 times from Biola’s network. Since January, 26.7 GB of data has been uploaded to ChatGPT from Biola users.

Do we need to be afraid of ChatGPT and other AI services? Is AI dangerous, or is it simply another tool? Can Biola employees use ChatGPT for work? Should they?


What is ChatGPT?

ChatGPT is a conversational AI developed by OpenAI and trained on massive amounts of text data. ChatGPT and other Large Language Models (LLMs) can generate conversational responses to text input, quickly produce text in various languages (including computer code), process large amounts of data, and even perform certain kinds of problem-solving tasks.

LLMs are effective at both creating new content and analyzing existing content. Users often upload data to ChatGPT and similar services to have it processed and summarized.


How you MUST NOT use ChatGPT

You must never upload sensitive information or proprietary business data to ChatGPT or other AI tools. Anything you share with ChatGPT is retained and used to further train the model. It's even possible that your uploaded data could be used by ChatGPT to answer someone else's query. OpenAI specifically warns users against sharing sensitive information.

ChatGPT must not be used with P2, P3, or P4 data, as defined by Biola’s Data Classification Standard. Never upload sensitive information to ChatGPT or other AI services.

Here are examples of information you should never share with ChatGPT:

  • Personally Identifiable Information
  • Financial information, including financial aid or loan information
  • FERPA data (including student ID numbers)
  • HR or personnel records
  • Exams (questions and answers)
  • Classroom or meeting recordings
  • Proprietary business information

Also, Biola does not have any data protection terms or breach agreement with OpenAI. In March, a bug in ChatGPT leaked sensitive data and payment information for 1.2% of its subscribers. Without data protection terms, Biola would have no protections for our users or data in the event of another data breach.

You may be tempted to ask ChatGPT to help you parse a large report of student marketing analytics, or to debug code for one of our web apps. Do not do it.


How you CAN use ChatGPT

ChatGPT should only be used with P1 data, as defined by Biola’s Data Classification Standard.

ChatGPT and other AI services can significantly improve productivity, but they must be used responsibly. Here are some examples of tasks that a tool like ChatGPT can do to help your work:

  • Write a job description
  • Fix an Excel formula
  • Summarize a government policy
  • Draft public marketing copy
  • Create ideas for Instagram posts
  • Analyze public data, like SEO results
  • Generate a Python script

As you can see, none of the above examples involve uploading proprietary business information or sensitive data to the AI service. Since no sensitive data is being shared, the usage is appropriate.
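For instance, a "generate a Python script" request in this spirit might produce something like the sketch below: a small helper that ranks pages by clicks in a public SEO export. The data, column names, and function are hypothetical examples, not Biola systems; the point is that nothing sensitive ever leaves your machine.

```python
import csv
import io

def top_pages(csv_text, n=3):
    """Return the n pages with the most clicks from an SEO export."""
    rows = csv.DictReader(io.StringIO(csv_text))
    ranked = sorted(rows, key=lambda r: int(r["clicks"]), reverse=True)
    return [(r["page"], int(r["clicks"])) for r in ranked[:n]]

# Sample rows standing in for a public SEO export (made-up data).
sample = """page,clicks
/admissions,420
/about,95
/programs,310
/news,60
"""

print(top_pages(sample, n=2))  # [('/admissions', 420), ('/programs', 310)]
```

Because the script operates only on public analytics data, asking an AI tool to draft or fix code like this stays within P1 usage.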

Most often, AI tools should only be used for generating a first draft, and results should always be taken with a grain of salt — AI Chatbots can be confidently wrong.


The Future of AI at Biola

ChatGPT and other AI services are tools, just like any other technology. We do not need to be afraid of them, but we need to use them responsibly.

Biola IT is currently scoping a project to implement a locally-managed AI service leveraging Microsoft Azure OpenAI. This service would be protected by our data security terms with Microsoft, and would be able to process sensitive data (P2-P4) without sharing it publicly.

If you have any questions, or a use case for how your department could improve productivity using Azure OpenAI, let us know by sending us an email at information.technology@biola.edu.