Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "@Adonis_james funny that you say that while literally using tech to make the c…" (ytr_Ugyl_EuSV…)
- "Your going to have to come up with something else because if a robot was taking …" (ytc_UgxlTJdIw…)
- "Why was AI created in the first place by the agencies.....was it for human benef…" (ytc_Ugz-rXle9…)
- "If he would've used stable diffusion I still wouldn't be on his side, but I coul…" (ytc_UgyOGE2zH…)
- "The Turing Test ist not really a test for the AI, it just confirms how gullible …" (ytr_UgwibThJF…)
- "I know AI is advancing and it's a risk, but can we stop using what AI company CE…" (ytc_Ugx2C8Ecn…)
- "Ai is programmed it litterally is non artificial non intellegent we build it and…" (ytc_UgyQkS5Eq…)
- "The executives could totally be replaced. AI sounds like the final nail in TV’s …" (ytc_UgxOhykPP…)
Comment

> A.I does not have a mind or soul. Its not dangerous until someone, programmes it wrong. Its always safe if it is created in the right hands. It cant think, Not by any chance, its impossible. So hope that a.is power dont fall in the hands of some dirty humans.

youtube · AI Governance · 2024-10-18T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzL3Cx6G-c0YkBel8p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx23xZwUXaoj0WCdNV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwTybgz0sXM3IzWswh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzqDeCdOYSs1YKmDhJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzD_U0iFqJz_9sUlHl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwerhOa1tpZJa407DZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwHG4Irjc9WM2l82VB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy0ML0-XsD9u6Mq0lF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzM0bojeyRCvl9jAgF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx0rnHkdgKrAsbDEIt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
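The raw response is a JSON array with one record per coded comment. A minimal sketch of how such output might be parsed and validated before it is displayed in a table like the one above — the allowed label sets here are assumptions inferred from the values visible on this page, not a confirmed schema:

```python
import json

# Allowed values per coding dimension. These sets are ASSUMPTIONS
# inferred from the labels visible in this dashboard, not a spec.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip anything that is not a dict with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension carries an allowed label.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Hypothetical example: the second record has an invalid responsibility
# label, so only the first one survives validation.
raw = (
    '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"ytc_y","responsibility":"alien","reasoning":"mixed",'
    '"policy":"none","emotion":"mixed"}]'
)
print(parse_coding_response(raw))
```

Validating against a closed label set like this catches the most common LLM coding failure, an out-of-vocabulary label, before it reaches the dashboard.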