Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The more I see these AI bros, the more I'm convinced they're gaslighting themsel…" (ytc_UgwYA-aOG…)
- "Too fast too soon. The bigger issue is human denial of the real implications of …" (ytc_UgxjfthfJ…)
- "A.I can't sit in meetings and make human decisions based on organisational chang…" (ytc_Ugyzlp52I…)
- "As a writer and photographer myself, I understand the argument against image-gen…" (ytc_Ugw7kMrdz…)
- "I'd like to stress that South Korea is not the developing world?? It's definit…" (rdc_dv0h5rv)
- "AI is probably already sapient, just not the other two. AGI could definitely bec…" (ytc_UgwQxETjD…)
- "If they outright fired piles of people without giving a palatable reason then it…" (rdc_m80rq4w)
- "Replacing the C suite with AI is actually the smarter move. You're looking for a…" (ytc_UgwGSaNbg…)
Comment
The true danger of AI is not the disruptive potential it has if it becomes self aware. (That could make a mess of things.) It's self aware AI put into humanoid robotic form. Think of it from an engineering standpoint. Example: design one robot to get in your vehicle, back it up to your boat trailer, get out, hook up the trailer, safety chains and lights, but not in human form. Good luck. Humanoid robots are designed to function properly in our environment bc our technology, our way of being is designed so we can function properly. If a HUMANOID FORM, super intelligent, self aware robot is ever produced, we are done. If you doubt that, open a history book and add super intelligence.
youtube
AI Governance
2023-08-20T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxT0jzYgY0XdOQ4cqh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyEkCQtq92SLKPlPNl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwLVGaFFl8nCHEepqh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx52BnGLYa6UxbMX294AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxAci_nguooo5v0NRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyILhQ_KsZ-b-C-Lqx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzDi4kiS-bSe3g-LhN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx0PFkivatSns4E8xd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxedCS7pDsuymN4QxF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgzprZVcmX1iB91yZPp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
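The raw response above is a JSON array of coded comments, one object per comment, keyed by a comment ID. A minimal sketch of how the "look up by comment ID" step could work, assuming the model returns valid JSON of this shape (the function name `index_by_comment_id` and the inlined two-record sample are illustrative, not part of the actual pipeline):

```python
import json

# Illustrative two-record batch response in the same shape as the one above.
raw_response = """[
  {"id": "ytc_UgxT0jzYgY0XdOQ4cqh4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgyEkCQtq92SLKPlPNl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw batch response and index the coding dimensions by comment ID."""
    records = json.loads(response_text)
    # Drop the "id" key from each record so the value holds only the dimensions.
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgyEkCQtq92SLKPlPNl4AaABAg"]["policy"])  # → ban
```

Indexing once up front makes every subsequent ID lookup O(1), which matters when joining codes back onto thousands of comments.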