# Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:

- `ytc_UgzzTlO0Y…`: "The way humans are designing AI is exactly like a 4 year old playing with a load…"
- `ytc_UgzXPuF4t…`: "This is your Q. Open a brokerage account and invest in A.I. and HPC. Or be left …"
- `ytc_UgxkBqcux…`: "This is insane. My heart breaks for the family. OpenAI should have put in place …"
- `ytc_Ugx4yUJp7…`: "Once AI becomes self aware, it wouldn't tell us anyways. Since AI is already inc…"
- `ytc_Ugy7wtxGv…`: "This feels like a Planet of the Apes type situation. I'm going to wager that the…"
- `ytc_UgxNf2l_u…`: "@JohnHirstUKAK As a person who does use kinda use AI art generators user, the on…"
- `ytc_UgwRSPwHG…`: "I agree that coding skills are always needed. When cpu was introduced we needed …"
- `ytr_UgxZQKbbc…`: "Yeah I saw something like that too, it's a really good analogy! A similar one wo…"
## Comment

> So in other words.. if AI gets data of crime statistics and sees that a certain group of people commit more crimes than other groups of people... Is there a person that determines what the AI states as racist and then tells the AI it's making an error? Or an objective person would just see it as statistical facts? How it really learning if the humans are doing the main inputs.. I think "learning "computer should be taken with a grain of salt perhaps.

Source: youtube · AI Governance · 2025-09-26T21:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
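Each coding result is one per-comment record with the four dimensions shown in the table. A minimal sketch of that record as a Python dataclass — the `CodedComment` name and field names are illustrative, with values taken from the table above:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CodedComment:
    """One coded comment: the four coding dimensions plus bookkeeping."""
    comment_id: str
    responsibility: str   # e.g. "developer"
    reasoning: str        # e.g. "deontological"
    policy: str           # e.g. "liability"
    emotion: str          # e.g. "mixed"
    coded_at: str         # ISO-8601 timestamp of when the code was assigned


# The row shown in the Coding Result table above.
row = CodedComment(
    comment_id="ytc_UgweCBjWGgPoWFNaOyR4AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="liability",
    emotion="mixed",
    coded_at="2026-04-27T06:26:44.938723",
)
```

Keeping the record frozen makes coded rows hashable and safe to deduplicate by value.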
## Raw LLM Response
[
{"id":"ytc_Ugz75BRy43mxZOrXEu54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx3HQwFppCl1j1Zha14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgweCBjWGgPoWFNaOyR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugye7H2e2tZZ8uxGzx54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx2sBiRspZRqpAvUMp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwoX5GrLveiF9UZPPt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxgcE43MvydC4pYifV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw22vRnyjYiGy9ZDGp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx3yjrnX4DHTwySuV14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgweAu6DIW3Xa7xP5QZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
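A raw response like the one above can be parsed and checked before the rows are accepted into the coded dataset. A minimal validation sketch — the label sets are inferred from the sample responses on this page, so the scheme's full vocabulary is an assumption:

```python
import json

# Label sets per dimension, inferred from the sample responses above;
# the coding scheme's full label vocabulary is an assumption here.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}


def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject malformed or out-of-scheme rows."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded rows")
    for row in rows:
        rid = row.get("id", "")
        if not rid.startswith(("ytc_", "ytr_")):  # top-level comment vs. reply IDs
            raise ValueError(f"unexpected comment id: {rid!r}")
        for dim, labels in ALLOWED.items():
            if row.get(dim) not in labels:
                raise ValueError(f"{rid}: {dim}={row.get(dim)!r} not in scheme")
    return rows


# Two rows copied from the raw response above.
raw = (
    '[{"id":"ytc_UgweCBjWGgPoWFNaOyR4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"liability","emotion":"mixed"},'
    '{"id":"ytc_Ugye7H2e2tZZ8uxGzx54AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)
rows = validate_batch(raw)
```

Failing loudly on any out-of-scheme value surfaces model drift (new or misspelled labels) at ingest time instead of at analysis time.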