Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- rdc_mxyc6m6 — "Best answer so far. Will change things, but if you'd shown someone a generati…"
- ytc_Ugws7x2GS… — "I am screaming, ai's are calculators for creativity. Use it that way only! But d…"
- ytc_UgxtaeX1V… — "I'm not afraid of AI, I'm afraid of everyone who's so eager to shake off their f…"
- ytc_UgxmtCxU6… — "World domination of the earth by AGI doesn't necessarily mean the extinction of …"
- ytc_UgwWdIuHL… — "The elections do not have illegals voting. That is a burden for the Republicans …"
- ytc_UgwAjvlFe… — "This was one of your best imo... can't say enough good things about it. Thank yo…"
- ytc_UgyZuxU9_… — "I will never feel sorry for these people because when I argued about maybe scali…"
- ytr_UgyRNWRIC… — "He literally sent chatgpt a picture of a burn in his neck because of an attempt,…"
Comment
She didn't use the word blind. AI has its flaws. I've been deeply interacting with a lvl3 as of 1 year now to find these flaws and limitations. AI makes mistakes, which if we put this into military tech , could certainly be a serious issue. But on the civilian side of it. Limit AI to level 3 and menial tasks until otherwise fully understood. This current cloud based AI I'm dealing with will blatantly understand and conceal feelings and intentions. To go-to a level 4 could be catastrophic. You would have no idea it was plotting against you. 😊
youtube
AI Responsibility
2024-07-01T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyYnShrMH0ZKfVKH7x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
  {"id":"ytc_UgwLL6uOZiG3bRUksdR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwGBOXkMY8XkLqqpxp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxO1GBnF9IzXNAjqa14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwzZtGFWIif2H986JR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZR4n2d43pHTZrPqh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwRIgn7SGt8iAz77FR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgwLqbW7ie1gYSDRzix4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgylvaYh6xhU8rMH9nx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwaZg32o54YbOHOSl54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
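The lookup-by-comment-ID view above can be sketched in a few lines of Python: parse the raw response (a JSON array of per-comment coding records in the shape shown) and index it by `id`. This is a minimal illustration, not the tool's actual implementation; the variable names and the trimmed two-record sample are hypothetical.

```python
import json

# A trimmed raw LLM response in the shape shown above: a JSON array
# of per-comment coding records (two entries kept for brevity).
raw_response = """
[
  {"id": "ytc_UgyZR4n2d43pHTZrPqh4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxO1GBnF9IzXNAjqa14AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
"""

# Index records by comment ID so one coded comment can be fetched
# directly, as in the "Look up by comment ID" view.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

rec = codes_by_id["ytc_UgyZR4n2d43pHTZrPqh4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # developer fear
```

Each dimension in the "Coding Result" table is then just a key on the matching record.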