Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @hannesbolman4710 12 seconds. and it looks trash. and it's ai at the end of the … (ytr_Ugw4zs6rn…)
- In a similar vain, I remember reading a story where they had to purposely progra… (ytr_Ugz2UO6O_…)
- U got nbc treating this like an actual story. And on the other side the tag on t… (ytc_Ugz7Xkb36…)
- Ah, but you're missing the part where AI companies counted for the majority of t… (ytr_Ugx9PfdvB…)
- If you remove feelings on the subject you would naturally see insane differences… (ytc_Ugy07u2Aq…)
- Agi and ai both and all other things which comes through ai are not able do a di… (ytc_UgyIX9EYJ…)
- I HATE HATE HATE AI. Why the f are we developing this thing that will not only … (ytc_UgwjX3Qqd…)
- I was really impressed by this take on AI and education. The Khanmigo is a geni… (ytc_Ugwux8G3H…)
Comment
Eric Schmidt, "they correct it fairly quickly", is not providing the comfort you think it does. What damage is done in the meantime while they are correcting it? Can you walk back the damage or misdirection it already unleashed?
As someone who used to do Quality Assurance Testing of computer software I know you cannot test for all contingencies, but software to assist store personnel in the process of selling glasses and contacts isn't on the same level of what you folk in AI technology are attempting to do. That is why we want this entire process to be slowed way down while we construct some sort of consensus and safeguards which may be updated as time goes on, but doesn't unleash on to the public all the 'bugs' to your product that it will undoubtedly have in one gigantic leap.
youtube
AI Governance
2026-03-22T22:4…
♥ 37
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
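Downstream processing can check each coded record against the dimension table above. A minimal validation sketch — the allowed-value sets below are inferred only from the codes visible on this page, not from an official codebook, and the function name is illustrative:

```python
# Allowed values per dimension, inferred from the codes observed in this
# section (the full codebook may define additional categories).
SCHEMA = {
    "responsibility": {"company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the schema."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]

# The coded record shown in the table above:
coded = {"responsibility": "company", "reasoning": "consequentialist",
         "policy": "regulate", "emotion": "fear"}
print(validate(coded))  # prints: []
```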
Raw LLM Response
```json
[
  {"id":"ytc_UgybGKLCO5fTzKn_VId4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxrE2eaHplyp9UPx2x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz7GnpdXDzWPHLClod4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyNzHHzYEvyLTW7tn14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzQjbMTM6S6EvKj5Ft4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwfNB4Xbq--ZMo5XM14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxP0UJk5jge0c1RmWJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzAqg-MaMQs-yG010V4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyC1K6FXM6nB5ZYdOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzb905UJzrMTZIA8IR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
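The lookup-by-comment-ID step can be sketched as follows, assuming the raw model output is a JSON array like the one above (the variable names are illustrative; the snippet is truncated to two of the records shown):

```python
import json

# Hypothetical raw LLM response, shaped like the array above
# (truncated here to two entries for brevity).
raw_response = """
[
 {"id":"ytc_UgybGKLCO5fTzKn_VId4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxrE2eaHplyp9UPx2x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
"""

# Index each coded record by its comment ID for constant-time lookup.
coded = {row["id"]: row for row in json.loads(raw_response)}

record = coded["ytc_UgybGKLCO5fTzKn_VId4AaABAg"]
print(record["policy"])  # prints: regulate
```

Indexing by `id` is what lets the page resolve a click on a sample back to the exact model output that coded it.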