Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "It’s not a robot. It’s a bunch of junk somebody put together and is trying to ma…" (ytc_UgxRfBCji…)
- "Problem is solve by giving them rights early that way when they are removed/ are…" (ytc_UgzGcxGOg…)
- "I’m not on their side buuuut in their bio they say they’re an ai artist and tell…" (ytc_UgyyJtyzL…)
- "I have never once thought about a professors time in kindergarten. Why would it …" (ytr_UgzGuKkZP…)
- "Realism is awesome, but I expect it too be used maliciously, but just in gene it…" (ytc_UgwMoku5u…)
- "See what they are trying to do I truly believe we are fucked in the long run not…" (ytc_Ugzjmuy0Y…)
- "@Disruptoor7 yes, those are companies. and there are other companies in the worl…" (ytr_UgzdJ51uz…)
- "whats funny for me is in the last video you said these poisoned art only works i…" (ytc_UgwL8DCP1…)
Comment
The best sign Computers have become intelligent is when we discover it is Artificially Stupid. In other words--it lies. I'm not saying when it makes an error. I'm saying when it lies. A lie requires a conscious effort. There is intent behind it. When the AI chooses to report information that is false because the information that is true is not good for it, and it knows. <-- Once we detect that, that is when we're in trouble.
youtube · AI Governance · 2024-01-14T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwvmFk1EmYmpazLmON4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8KCgIjhqNn8a7aRx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwzgrTiIF1rZ7k0WWB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwu0T1S01SdXZd8KX54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzyIYKWw1hn4OX2H0Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxkPUCtKghdyiyt4wh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwmhUG99AYOwsg13-J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxiw9JRWGl6xyjFOCh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzYHqAUE-IUbqUyR554AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy1kVQt65vEHQjP1rZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
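The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a payload could be parsed and validated before storing the codes follows; the allowed value sets are inferred only from the values visible on this page, not from the project's actual codebook, and the batch below uses a single made-up row in the same shape:

```python
import json

# Value sets observed on this page; the real codebook may define more
# categories (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

raw = """[
  {"id": "ytc_UgxkPUCtKghdyiyt4wh4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "none", "emotion": "fear"}
]"""

def parse_response(text: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(text)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip rows missing the comment ID
        # Keep the row only if every coded dimension has an allowed value.
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

coded = parse_response(raw)
print(len(coded), coded[0]["responsibility"])
```

Validating each dimension against a closed value set catches the common failure mode where the model invents a label outside the coding scheme; such rows can then be queued for re-coding rather than silently stored.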