Raw LLM Responses
Inspect the exact model output for any coded comment: look one up directly by its comment ID, or pick from the random samples below.
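For offline inspection, the same lookup can be done in code against an exported codings file. The following is a minimal sketch, assuming a hypothetical codings.json export holding a list of per-comment records shaped like the raw LLM response shown further down (objects with id, responsibility, reasoning, policy, and emotion fields); the filename and loading details are assumptions, not the tool's actual storage format.

```python
import json


def load_codings(path: str) -> dict[str, dict]:
    """Index coded comments by comment ID.

    Assumes `path` points to a JSON file containing a list of records like
    {"id": "ytc_...", "responsibility": ..., "reasoning": ...,
     "policy": ..., "emotion": ...} -- the same shape as the raw LLM
    response shown below. The file name and format are assumptions.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}


if __name__ == "__main__":
    codings = load_codings("codings.json")  # hypothetical export file
    # An ID taken from the raw LLM response shown further down the page.
    comment_id = "ytc_UgxEJ-LSJIMft0kEmel4AaABAg"
    coding = codings.get(comment_id)
    if coding is None:
        print(f"No coding found for {comment_id}")
    else:
        for dimension in ("responsibility", "reasoning", "policy", "emotion"):
            print(f"{dimension}: {coding[dimension]}")
```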
Random samples:
- "At first I used c. Ai for fun now I act thirsty for the ais…" (ytc_Ugx1aRzMs…)
- "Anyone who has had to deal with using AI knows the only threat of Ai is the assu…" (ytc_UgyWl78BZ…)
- "You are not drawing with AI, you are not making art, it's not the only way you k…" (ytr_UgzhpXii2…)
- "as someone who works in the field (of AI), I think what's most startling about t…" (rdc_fcs99w9)
- "Soon will come the day if not already the design intent for AI where simply givi…" (ytc_UgxRVhWEU…)
- "How about this for a law: only new companies can use AI. This way, everyone keep…" (ytc_UgxhTlPZr…)
- "not always easy to bypass, but if you wanna check if chatgpt text sounds human, …" (ytc_UgxvTCpOE…)
- "Imagine someone showing up 600 years after wards and you believe them over eye w…" (ytc_Ugz5W5MWN…)
Comment
So what happens if a truck has a blow out or some part fails that a human would normally identify in the pre-trip? The truck can’t automatically repair itself, it isn’t a transformer. What happens if the truck gets into a fatal crash and is found at-fault? Who gets the consequences and what are they? Is the truck able to detect concerning conditions and not go flying 70mph through a school zone or construction zone? There’s so many safety concerns here. I don’t understand this.
As a human truck driver, it is already a task to be safe and avoid mistakes so that this 70,000 pound missile doesn’t plow through someone’s minivan and kill their entire family. Now think of an AI having to make those split-second decisions or react in such a way that has the best outcome
youtube · AI Jobs · 2025-10-15T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugz8pY3y4ExYejwUcjl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzeW3v5IYBG2wcmWQd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzN_4-u8AuqqkchBbl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx3PrrBlNtMhjuUtdp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugws794Sx0yDuvSoB9p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwteoxVOkahQB1OuiZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxeV2vPf74tRGetDsp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzD5qYg08TmXS3FpbV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxEJ-LSJIMft0kEmel4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzG1X80ckY6njccmit4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]