Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below.

Random samples (click to inspect):

- `ytc_UgzhIBT5Q…`: "as both a software engineer AND a person who draws at an amateur level (I don't …"
- `ytr_UgzOoASRl…`: "First of all, you are referring to tracing, which has been around forever and ha…"
- `rdc_l4os7er`: "Hundreds of thousands of words and pages have been written on the topic of AI Ri…"
- `rdc_hkfn9f4`: "Definitely NOT stealing! They gave you a task, rated you by quota, left it up t…"
- `rdc_jhb07ds`: "I'm a lawyer and I've found that ChatGPT has actually been mediocre to bad, but …"
- `ytc_Ugx218Xke…`: "You train an AI on human natural language artifacts and it will act like it has …"
- `ytc_UgwfuoSFI…`: "You said in 2-3 years Tesla will be smart. What do you think Wayno will be 2-3 …"
- `ytc_UgxKjL406…`: "They say automating jobs will lead to growth in different fields, but companies …"
Comment (youtube · AI Moral Status · 2025-08-10T21:0…):

> Yes lets ask an astrophysicist about the workings and dangers of AI. Same thing right. He does not seem to understand that it does not matter if people want to use AGI or not. Geoffrey Hinton, one of the founders of deeplearning, has outlined why a development like this is so dangerous. At some point there is no option not to use it, governments will rely on it and you are basically bumping humans down a step in the food chain
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx4xk6sitWbQt_oU-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFUAurxBBABkLBXhV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgykfZH_ODyA0wKTd-V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxJU-DKv0ylNVRO4B14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy5XcWHLVzawvqPbSV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy6lJebl7626QoONc94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwBXQWUN6fxWDWOaCN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgykDaCCgUiyURTgUpZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwGIaumVBNSuuOeYDZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugywmk47QRlSP6nqSst4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
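The raw response is a JSON array of per-comment records, each keyed by a comment `id` with four coding dimensions. A minimal sketch of how such a batch could be parsed and a single comment's coding looked up by ID (abridged to two records from the array above; the variable names are illustrative, not part of the actual pipeline):

```python
import json

# Abridged raw model output: a JSON array of coding records,
# one object per comment, in the format shown above.
raw = '''[
{"id":"ytc_Ugx4xk6sitWbQt_oU-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugywmk47QRlSP6nqSst4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]'''

# Index the records by comment ID so "look up by comment ID" is a dict access.
codings = {rec["id"]: rec for rec in json.loads(raw)}

rec = codings["ytc_Ugywmk47QRlSP6nqSst4AaABAg"]
print(rec["responsibility"], rec["policy"], rec["emotion"])  # developer ban outrage
```

Note that the last record in the batch carries exactly the values shown in the Coding Result table (developer / consequentialist / ban / outrage), which is how a coded comment can be traced back to the exact model output that produced it.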