Not humans-in-the-loop.

LLMs have the power to change the world of work.

For the first time, we’ve created a system that allows non-experts to communicate with computers using natural language.

Let’s explore that idea.

Behind the digital interfaces that power today’s world of work lies a complex array of machine languages – also known as programming languages.

(One could even argue that digital interfaces themselves form a language: a language of buttons, shapes, images, colors, and text.)

These machine languages are operated by a technically literate cadre – programmers, developers, coders – who instruct computer systems and their connected machinery to do things.

Non-technical folks are expected to bridge the gap: to communicate with the technically literate, who in turn communicate with the machine. These folks are the product managers and business owners – translators of business needs into digital build-outs. (Using a dialect of natural language all their own, you might say – the kind learned in B-school.)

Then along comes the Large Language Model.

A model so vast, it can accommodate (or perhaps swallow) all languages – especially those that are digitized and cross-annotated. And there’s a rich reservoir of those: websites, web apps, documentation, and code repositories, all containing traces of humans talking to machines, and vice versa.

The Large Language Model, or LLM, accepts input in natural language (English*) and translates it into output in a programming language (or in English, or in a non-English natural language).

The translation is far from perfect and is only as good as the cross-annotation. But it certainly makes computer code understandable, and to some degree editable, by non-experts.

What’s the right stance for navigating this new scenario?

The market seems to be pulling towards workflow automation – a front-and-center use case for the pre-LLM crop of machine intelligence. MCPs, agents, AGI, etc. are all attempts to quickly get the human worker out of the loop.

But not quite out of the loop.

The human worker is still plugged in as a stopgap against poor LLM decisions. The awfully labelled (and just awful) “human-in-the-loop” approach.

Why LLM-in-the-loop is far better (philosophically and naturally).

  1. Tools are things we make to serve human aims (duh)
  2. Humans are great at using tools to do amazing things (duh)
  3. Institutions in general, and corporations in particular, are set up to serve human aims (duh)
  4. Helping humans work better is of greater service to humans than is replacing them (umm, yeah no, makes sense, duh)
  5. Technology is deflationary and we risk shooting ourselves in the economic foot by bypassing human users for short-term gain.
  6. LLMs are probabilistic models built on top of old-fashioned deep machine learning, only now pre-loaded with all the brilliance and follies of a small part of humanity.
  7. LLM use is expensive at scale. The simplest way to bring costs down is to invoke the LLM restrictively, “in-the-loop” as it were.

What is LLM-in-the-loop?

Coming soon.
