Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Technology advancement can be relished only when it is under human control. It should really be useful and worthy enough as for example , communication systems , which evolved From pigeons to mobiles.
But AI replacing human jobs , creating new jobs, sounds alarming.
I did expect this scenario of AI going uncontrollable.
I certainly would say , there is a limit and a threshold level to anything, and AI is one.
The world will definitely be better and peaceful without AI.
I only wish tech freaks and scientists to rethink before they invent and launch something.
Where are they heading to? Is it to completely eradicate peace?
Well, we are already amidst chaos, and why race for it more?
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2025-06-02T11:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
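Each coding assigns exactly one value per dimension. A minimal sketch of a sanity check for batch output, with the allowed values inferred from the records shown on this page (the real codebook may define additional categories, so the `ALLOWED` sets below are an assumption):

```python
# Allowed values per dimension, reconstructed from the codings displayed
# on this page (hypothetical; the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "government", "elite",
                       "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "mixed", "outrage", "disapproval", "indifference"},
}

def validate_coding(record: dict) -> list:
    """Return the dimensions whose value falls outside the codebook."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding shown above passes cleanly:
print(validate_coding({"responsibility": "developer",
                       "reasoning": "consequentialist",
                       "policy": "regulate",
                       "emotion": "fear"}))  # → []
```

A record that hallucinates an unlisted value (or omits a dimension) shows up as a non-empty list, which makes malformed model output easy to flag before it reaches the database.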
Raw LLM Response
```json
[
{"id":"ytc_UgyR8q6gdcwkYR-LJpx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxlCkL4eEYbMSR8CUN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyHarEAEtPOnYgE8DV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwKoejZwNGXfm6DzEN4AaABAg","responsibility":"elite","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwu3Oa75ob8V4QxiuF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgwOc-pKkE4hc2Ms2n94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxIGBet6-uKZ3PTuJZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzS3T9A1flR-0wEiYZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzLZJk3PUP7IIf-9YF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyefl02vXx9R70lIg94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
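The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a batch can be parsed and indexed for lookup, using two of the records shown above (the first matches the Coding Result table on this page):

```python
import json

# Two records copied from the raw LLM response above.
raw = '''[
{"id":"ytc_UgyR8q6gdcwkYR-LJpx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzS3T9A1flR-0wEiYZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]'''

# Index the batch by comment ID so any coding can be looked up directly.
codings = {rec["id"]: rec for rec in json.loads(raw)}

rec = codings["ytc_UgyR8q6gdcwkYR-LJpx4AaABAg"]
print(rec["responsibility"], rec["policy"])  # → developer regulate
```

In practice `json.loads` will raise `json.JSONDecodeError` if the model wraps the array in prose or a markdown fence, so a production pipeline would want to catch that and retry or strip the surrounding text first.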