Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
When it gains actual intelligence, is it still autonomous? Am I an autonomus? Wh…
ytc_UgwZyGNN8…
I love AI art programs! I've been able to generate loads of daft scenarios that …
ytc_UgzcAprqi…
"scared of ai" ...an " old head ?" With his old ass asking that question pfft…
ytc_Ugz9nZw9J…
That one guy, "if we don't think AI is a good idea we won't make it, of course."…
ytc_UgzxqWrvu…
This is why it's all of a sudden an Issue for these people. Guess they ignored …
ytr_UgzXXg0rp…
For More fairness in the claims, please also Include MIDJOURNEY and DALL-E (open…
ytc_Ugxhyhstb…
I think you make a really good point about AI and freeing up time. AI as a too…
ytc_UgzvAcLt9…
Call me stubborn and oldschool but why are we not regulating things like chat gp…
ytc_UgzQ8OGLg…
Comment
Except this is not how ai works at all and it does not really think. All it does is token predictions and for each task there are soecialized algorithms which do that, as they only can do one thing.
youtube
AI Governance
2025-04-12T05:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzZMolwOKJj4PA3lsV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyJ4KhsFmpz2Jenusd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYRThkA87qJ0dp1Ch4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw81AyY9650-K41wKh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzIyx4KlvraaRBNndh4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzYng4MeafXkkqNmaN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwgQKjpMxvwEKFUqqR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxHYDqE80OyYKpcVcp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxUQfgbLDKzGHvYjIx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy71ak3uiG1n6qK9JV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
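The raw response above is a JSON array with one record per comment ID. A minimal sketch of how such a response could be parsed into a lookup table, with out-of-vocabulary codes rejected — the allowed value sets below are inferred from the sample output and are assumptions, not an exhaustive codebook:

```python
import json

# Code sets inferred from the sample response above (assumptions,
# not the tool's actual codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "mixed", "indifference"},
}

def parse_raw_response(raw: str) -> dict[str, dict]:
    """Parse the model's JSON array into a dict keyed by comment ID,
    dropping any record that uses a code outside the allowed sets."""
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

A lookup such as `parse_raw_response(raw)["ytc_Ugy71ak3uiG1n6qK9JV4AaABAg"]` would then return the coded dimensions shown in the table above for that comment.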