Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I'm more worries about the "Paperclip Problem" than a super human intelligence. …
ytc_Ugywgk6du…
Actually this is more dangerous than trump thinks for a reason; copyright is shr…
ytc_UgwlOGsdT…
There qualifying reference that determins whether or not so-called AI is conscio…
ytc_UgzBRx5ln…
NonverbalShoe noo... how dare you :O
hahaha... then lets try a safer approach..…
ytr_UggP-iFt1…
Watch Peter Diamandis moonshot podcast, all very highly respected ai scientists …
ytr_UgzkH65KP…
Tell your friend’s dad that even during the dotcom crash no one ever seriously d…
rdc_nom82m9
r/StableDiffusion community is exactly what is wrong with current "AI". I don't …
ytc_UgxUlzuzI…
Great idea for a video! I debated ChatGPT back when it was first released on the…
ytc_Ugzysy39b…
Comment
Reading through the comments I can see that people are misinterpreting this information. I myself have tried implementing AI into my business, it didn't go well. AI agents are terrible on the phone still.
But one thing everyone is getting wrong is that even though AI doesn't work all that well now for many human tasks, it's improving exponentially at a rate we can't even fathom. Maybe AI can't replace you now, but a new model WILL replace you soon.
I urge you all to start working for yourselves. Start a business and leverage AI, get some money to invest into assets. Don't panic, prepare.
youtube
AI Responsibility
2025-10-10T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugyq2sIWKaEIUbJclx54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDV310JN0K4PjAj5l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxQS5AgZnt7FrR-4_h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwXxt3gXj6s3qR90jB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyBL9MjfSRoRoUIAAZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzoX8E9gRwPHn0H07t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyw1LWeAlLDDtOptox4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz14U9vmIluo0xlbB94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyENU-j62zleqyGz7B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2aE_hBRkwliGw6OB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
```
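The raw response above is a JSON array with one record per comment, each carrying an `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch can be parsed, validated, and tallied (the two-record excerpt and the tally of `emotion` are illustrative, not the actual pipeline code):

```python
import json
from collections import Counter

# Excerpt of a raw LLM response: a JSON array of per-comment codes.
raw = '''[
 {"id": "ytc_Ugyq2sIWKaEIUbJclx54AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
 {"id": "ytc_UgxDV310JN0K4PjAj5l4AaABAg", "responsibility": "distributed",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codes = json.loads(raw)

# Basic validation: every record must carry an id plus all four dimensions.
for record in codes:
    missing = [key for key in ("id", *DIMENSIONS) if key not in record]
    if missing:
        raise ValueError(f"record {record.get('id', '?')} is missing {missing}")

# Tally a single dimension across the batch.
emotions = Counter(record["emotion"] for record in codes)
print(emotions)  # → Counter({'indifference': 1, 'resignation': 1})
```

Validating each record before tallying matters here because LLM output is not guaranteed to follow the schema; a missing key fails loudly instead of silently skewing the counts.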