The Model Spec

Harri Juntunen
1 min read · May 11, 2024


This is a great opening from OpenAI. What hard guardrails should be established at the model level, and what should be left open to interpretation? In ethics there is no single correct answer, which is why OpenAI’s approach of empowering developers and users is commendable.

It would be far more problematic if a small group within an AI company tried to dictate what is universally right and good. A notable example of this failing is Gemini’s guidelines, which led, among other things, to distortions of history in generated images.

There are some clear, non-negotiable guardrails, such as the principle of ‘do no harm.’ However, as OpenAI points out, capabilities like synthetic data generation can serve both positive and negative purposes. Technologies are neither inherently good nor evil, and importantly, they are not neutral either.

With AI, this is truer than ever. It is crucial for us as humans to debate and decide how we will use these powerful tools. Engaging in these discussions about the uses and limits of technologies like LLMs is one of the most human responsibilities we face today.

