ChatGPT
The release and widespread use of ChatGPT – a large language model (LLM) developed
by OpenAI – has created significant public attention, chiefly due to its ability to quickly
provide ready-to-use answers that can be applied to a vast range of different
contexts.
These models hold masses of potential. Machine learning, once expected to handle
only mundane tasks, has proven itself capable of complex creative work. LLMs are
being refined and new versions rolled out regularly, with technological improvements
coming thick and fast. While this offers great opportunities to legitimate businesses
and members of the public, it can also pose a risk to them and to the respect for
fundamental rights, as criminals and bad actors may seek to exploit LLMs for their own
nefarious purposes.
In response to the growing public attention given to ChatGPT, the Europol Innovation
Lab organised a number of workshops with subject matter experts from across the
organisation to explore how criminals can abuse LLMs such as ChatGPT, as well as how
these models may assist investigators in their daily work. The experts who participated in the
workshops represented the full spectrum of Europol’s expertise, including operational
analysis, serious and organised crime, cybercrime, counterterrorism, as well as
information technology.