Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It doesn't think and it can't reason. Spitting out words according to statistics is not thought. It says it will kill tens of millions because most of the text in its training data is people discussing and warning of the dangers of AI. Most sci-fi warns of the dangers of AI, there are not that many books where AI is a totally benevolent force. AI tells us we are dangerous because we have written that it is dangerous. It might actually BE dangerous, but asking AI if it would kill people is meaningless.
Source: youtube · 2025-11-07T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugz_UwipnXUoOkKGbTV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxmMWzGB1Wq9FEADEd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzDqzYLCyXkpwWNVYl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyq3Q67ibrEpBAKGx14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgykXpjuSnHHMake91Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzEhBlzIUWXvRqAXuZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwSTuPST7yTbqD7mwR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyCWL3LZXz17d9Akr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxy7U696rFraK85Q994AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw_-wl11uSmUmSMK5d4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"}]
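The raw response above is a JSON array of coding records, one per comment ID, each carrying the four dimensions shown in the result table. A minimal sketch of parsing and validating such a response follows; the allowed label sets are inferred only from the values visible in this sample (the full codebook may define more), and `parse_coding_response` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Label sets observed in the sample output above (assumption: the real
# codebook may allow additional values for each dimension).
RESPONSIBILITY = {"ai_itself", "developer", "company", "user", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "regulate"}
EMOTION = {"fear", "mixed", "approval", "indifference", "outrage", "resignation"}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record's labels."""
    records = json.loads(raw)
    for rec in records:
        for field, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            if rec.get(field) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {field}={rec.get(field)!r}"
                )
    return records

# Example using the first record from the response above.
raw = (
    '[{"id":"ytc_Ugz_UwipnXUoOkKGbTV4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"}]'
)
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # prints: fear
```

Validating against a closed label set like this catches the common failure mode where the model invents an off-schema label, so bad records fail loudly instead of silently entering the coded dataset.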