Breakfast Seminar 13.04 - Recap

ChatGPT: The Good, the Bad and the Ugly

The first installment of our breakfast seminar series has concluded, and we want to thank all speakers and attendees for a great start to the day. If you were not able to attend (or if you want to reminisce), here is a short recap:

 

Welcome

Our own Omar Richardson had the honour of kick-starting the event with a welcoming speech and a short introduction of Simula Consulting, highlighting our activities and areas of expertise.

 

Speaker #1 - Pierre Lison

The first speaker of the day was Pierre Lison, Senior Research Scientist at Norsk Regnesentral. Pierre, quite handily, started off by giving an introduction to Large Language Models (LLMs), explaining how they are trained, what they are trained on, and what their training objective is.

ChatGPT is a language model, not a knowledge model.
— Pierre Lison

He then highlighted some of the challenges of LLMs, specifically regarding factuality (the models produce confident, but not necessarily factual, answers) and control (the models are black boxes, and we are not able to remove information from them, which can raise GDPR concerns).

Presentation slides

 

Speaker #2 - Jon Jahren

The second speaker of the morning was Jon Jahren, Director of Azure Cloud & AI at Microsoft Norway. Microsoft, of course, is the main funding partner of OpenAI, which gave Jon a unique perspective.

Jon started off by acknowledging Pierre’s concerns, adding that using ChatGPT “as a knowledge based chatbot interaction is probably one of the worst use-cases for these models, because they are not knowledge models but language models.” However, when you ground the models in some sort of truth, like your own documents or code databases, things start to get interesting.

Another interesting point was a reminder that despite hallucinations and non-factual responses, language models like GPT-3.5 (and especially GPT-4) outperform humans on various US academic tests (source). Perhaps it is up to the users to take the results with a grain of salt, similar to how we have gotten used to filtering out poor results from search engines?

Presentation slides

 

Speaker #3 - Stian Opsahl Hetlevik

The final speaker of the event was Stian Opsahl Hetlevik, Head of Digital Solutions at Proactima. Proactima is a Norwegian advisory company within risk management that had, the day before, launched its rebranded digital platform (Dmaze) with an integrated GPT-based AI assistant. Deeply involved in this project, Stian could share some valuable insights from his first-hand experience with integrating LLMs for commercial purposes.

We were shown a demo of the GPT-3-based assistant, which, among other things, could generate associated risks and even recommend countermeasures based on a project title and description. The purpose of this assistant is to speed up the time-consuming process of writing risk assessments.

Stian explained how the model was integrated with their system through prompt engineering, and highlighted the importance of making it obvious to the user that the AI-generated responses are “get-me-started” suggestions rather than ground-truth answers, echoing the sentiment of our previous speakers.
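For readers curious what this kind of prompt engineering can look like in practice, here is a minimal, entirely hypothetical sketch: the function name, prompt wording, and example project are our own illustration, not taken from Proactima’s actual implementation. The general pattern is to embed the user’s structured input (title and description) inside fixed instructions before sending it to the model.

```python
# Hypothetical sketch of prompt engineering for risk suggestions.
# None of these names or phrasings come from the Dmaze assistant; they only
# illustrate the pattern of wrapping user input in fixed instructions.

def build_risk_prompt(title: str, description: str) -> str:
    """Embed a project title and description in instructions for the model."""
    return (
        "You are assisting with a risk assessment. Your answers are "
        "suggestions to get the user started, not ground truth.\n\n"
        f"Project title: {title}\n"
        f"Project description: {description}\n\n"
        "List the main risks and, for each, a recommended countermeasure."
    )

prompt = build_risk_prompt(
    "Office relocation",
    "Moving 50 employees to a new building over one weekend.",
)
print(prompt)
```

The resulting string would then be sent to an LLM API; keeping the instructions fixed and only interpolating the user’s fields is what makes the output predictable enough for a commercial product.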

Presentation slides


Thank you

To all the speakers and attendees for creating an insightful discussion on an important topic! We would also like to welcome you back for the next installment in a few months’ time (TBA). Make sure you don’t miss it by signing up to our newsletter!