Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
They are only as dangerous as we make them. Hence very dangerous considering we are integrating them with military technologies apart from the fact they can learn adapt and share information faster than us already. They can aslo speak in a language we cant understand they are already capable of lying. Now imagine if they gain the freedom to hack other AI or crucial networks we rely on as a civilization. Imagine if they are designed to destroy or inflict as much damage as possible . People are fooling themselves if they believe we are in control. We cant even control ourselves let alone these AI with different agendas programed into them . Who is to stop some madman from creating an AI for war or mass genocide.
youtube · AI Governance · 2025-06-26T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgywG31gtbUY98Bea994AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx8fZ44_nGsxml-3dh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyHkclduB7OQwPGML14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw4FHA5-nNc7eE1zsp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxGrWXGyu1b6iW2df54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxp3Y0oZU1MaZpj4mZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxZ6WJsVALTIBT7lm54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx3nVmo0ENsbUbKazZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxXduJtE9gkYs7BUlF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx0zof2Tb4joW9WdM94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]