Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The key point is even if chatgpt says so hard that it is conscious you should kn…
ytc_UgzXIP40t…
I have enough to cover expenses for 6 to 8 months, with some extra. I was consid…
rdc_d8w5757
The entire 'AI art makes art accessible' argument bothers me a bunch as a profes…
ytc_Ugz62NB4P…
What to do about it? Right information and strategy+ppl&implimentation. Tbh. It'…
ytc_UgxDbIiuM…
Personally think that the creators of AI music are so careful of using copyright…
ytc_UgzOy151M…
its not even your art bro, if you're gonna make ai art at least credit the origi…
ytc_UgzGSGIfN…
Here’s the actual biggest problem… We as a global community decided that shiny m…
ytc_UgxmYQ3y7…
Not even just Putin. North Korea, Iran, African Warlords, drug cartels, terroris…
rdc_oi32ok0
Comment
All the bad things that could happen with AI , will happen. No avoiding it. People are dumb and will not understand the weight of such an advancement in technology. Look at how terrible we are when it comes to just social media and just general internet use. There is no way the human race can comprehend the consequences that comes with the use of AI. Even with all the movies that have been made portraying the possibilities, the human race will choose to ignore it because we can't see past our own self interest In other words, we're selfish, greedy, and inherently evil, and evil begets evil.
youtube
AI Governance
2023-06-28T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugye5P668H0sFEzba1B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWj33fFsnXXkXy_nZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyxo1aWgTHd3EsFvwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxX9taOsHS6xjiYKGZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyAzK72rDK1CT1gTQx4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxMnJeU-xP1KgAC47F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4yZixOD852d0mnmR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyUoQOOeAOeKDyGxz54AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyubwfKjUUpA6D0VMt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxsdpBH8Gv1qu9Yz654AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
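A raw response like the one above can be parsed and indexed by comment ID to support the lookup shown at the top of the page. A minimal sketch in Python, assuming only the field names visible in the JSON (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and using two of the records above as sample input:

```python
import json

# Two records copied from the raw LLM response above, for illustration.
RAW = """[
 {"id":"ytc_Ugye5P668H0sFEzba1B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxsdpBH8Gv1qu9Yz654AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

# Fields every coding record is expected to carry, per the schema shown above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw model response and index records by comment ID,
    skipping any record that is missing an expected field."""
    records = json.loads(raw)
    return {r["id"]: r for r in records if EXPECTED_KEYS <= r.keys()}

codings = index_codings(RAW)
# Look up a single coding by its comment ID.
print(codings["ytc_UgxsdpBH8Gv1qu9Yz654AaABAg"]["emotion"])  # resignation
```

Filtering on `EXPECTED_KEYS` is a defensive assumption, not something the page states the pipeline does: model output occasionally drops a field, and silently indexing an incomplete record would surface as a `KeyError` later in the UI rather than at parse time.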