Should We Be Worried?

We are intoxicated by the logic of it all.

Machines are being inserted very quickly into places that in the past were filled with human judgment. We are approaching a future that isn’t messy. Where big mistakes may not be made. But also where amazing paths may not be explored.

Computer-to-computer interactions are squeezing the human out of the loop and driving toward a possibly irrevocable future.

Look at hiring. Job descriptions are no longer written solely by humans. They get sanitized of what really needs to be solved. A machine determines who is worth talking to. And the people doing the evaluating aren’t even honest about what they need. They are running toward something they’ve been told to run toward. The applicant is forced to let the machine represent him; the hiring manager lets the machine do the evaluating. Neither of them is fully in the room anymore.

I spent 20 years working in airline tech. I watched every wave hit this industry, from green screens to mobile apps. Over the last five months I did a deep dive into the intersection of AI and travel. I talked to a global network of contacts across the industry. I designed an orchestration solution. I even vibe coded a view into the future. And where I cut corners for efficiency, the plate didn’t taste as good as it looked. I did all of it because I needed to see for myself whether human product taste could survive the AI-produced outcome.

I am not suggesting we go back to making decisions without the incredible access to information we now have. But we need to recognize where the information ends. Where the critical decisions still require judgment. Not just statistical reasoning.

That line is getting harder to see. That worries me.

Worry is only useful if it leads somewhere. I’ll keep working on what a more human path looks like. I’d rather think it through with other people than alone. Or worse, alone with my computer.

________________________________________________________________________

Notes:

Anthropic has published research that dives deep into the societal impacts of AI. A few threads from it connect to the ideas in this piece:

— Amanda Askell, lead author of Anthropic’s published constitution, explicitly names “epistemic autonomy” as a core value — the concern that AI talking to millions of people could quietly homogenize thinking and erode independent judgment.

— Philosopher Benjamin Lange’s paper “Epistemic Deference to AI” argues that AI outputs should inform human judgment, not replace it.

— Researchers call the broader pattern “automation bias” — the well-documented human tendency to defer to automated systems even when our own judgment is better. The hiring example in this piece is a real-world case study of that.

https://www.anthropic.com/research