Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
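Conceptually, the lookup is just an index from comment ID to its coded record. A minimal sketch in Python, using the record shape the model returns (two records copied from the raw response below; the tool's actual storage layer is not shown on this page):

```python
import json

# Two coded records in the shape the model emits (taken from this page).
raw = """[
 {"id":"ytc_UgzORxCaXFjA9isOtbx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugyh0xDKwo9LOdQ44td4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]"""

# Index once, then look up any coded comment by its ID.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

print(by_id["ytc_Ugyh0xDKwo9LOdQ44td4AaABAg"]["policy"])  # ban
```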
Random samples — click to inspect

- Too rosy tinted glasses? Not all students may be amenable to improved learning m… (`ytc_UgzHP61b7…`)
- 24:53 " According to the World Economic Forum's future of jobs report in 2025, P… (`ytc_UgxBx4jDx…`)
- @Jcrutch no ai doesn't take inspo AI is a machine there is no human effort the… (`ytr_Ugxs8mE3P…`)
- "Sure, here's a short YouTube comment that you can use to console someone who mi… (`ytc_UgwLul57w…`)
- A great podcast for the first hour, shedding much light on AI and the near futur… (`ytc_UgxXB-K1w…`)
- most people behave as machines anyway.. just one more robot player in the robot … (`ytc_Ugyv6-6Jy…`)
- I'm a google engneer and I get to determine what is "bias" in AI, but I really d… (`ytc_UgwHjms3r…`)
- 😮. The more we input into these AI apps , the more others can use them to track … (`ytc_Ugz1vSm2C…`)
Comment

> They make robot to increase their business productivity, it's not for people benefits it's for the Elites.
> Soon, there are not human workers needed, all are robots doing the jobs. You'll be unemployed and become poor of not getting jobs.
> Later, they'll make people starve to death. Once that happen, they'll use the robots to kill and eradicate you people, cause you're no more useful for their business. You're just their worse liability that they won't care anymore.
> Mark my word, AI is far dangerous than nukes - Elon Musk

youtube · AI Moral Status · 2022-07-26T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzORxCaXFjA9isOtbx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxxukVjYsayWvV6jSF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyllMQqxUALc0E1Hs54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwFVWwiHVxwyO9L2mR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgySPgFqXfzFtX-1PPx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgztSnLOEyQlHcPA18t4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzhNJVieFeMW5Qec-F4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyh0xDKwo9LOdQ44td4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyVZM7dYJWz9bcDucN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzyLf8qx1gLsHWwHN14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
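A raw response like the one above can be machine-checked before the codes are accepted. A minimal validation sketch in Python: the allowed values below are inferred from the samples on this page, not from the project's actual codebook, so the sets may be incomplete:

```python
import json

# Excerpt of a raw LLM response in the format shown above.
raw = """[
 {"id":"ytc_UgzORxCaXFjA9isOtbx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugyh0xDKwo9LOdQ44td4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]"""

# Allowed codes per dimension, inferred from this page's samples; the real
# codebook may define more values.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological", "contractualist"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def validate(records):
    """Map each offending comment ID to its invalid (dimension, value) pairs."""
    problems = {}
    for rec in records:
        bad = [(dim, rec.get(dim)) for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        if bad:
            problems[rec["id"]] = bad
    return problems

records = json.loads(raw)
print(validate(records))  # {} when every record uses only known codes
```

Running the check on a full batch flags any record where the model drifted outside the coding scheme, so it can be re-coded rather than silently stored.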