Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples — click to inspect
- "since not many people seem to care, here are some reasons why ai is terrible and…" (`ytc_UgywOfoZT…`)
- "Data colonialism? Just give it a break. This whole oppression narrative is obn…" (`ytc_UgxF5Ap7a…`)
- "They do have a default role they'll try to play if you don't beat them up about …" (`ytc_Ugyn-XZKe…`)
- "For peeking inside the system, a system so complicated we cannot understand whet…" (`ytc_UgxWe1HTo…`)
- "We will have to create a AI/Human co-interactive interface. A place for it to ex…" (`ytc_Ugzl4jzqZ…`)
- "I think people don't realize the magnitude of we've done. Artificial Intelligen…" (`ytc_Ugws2_TFP…`)
- "This is NOT the fault of the programmers. This is the fault of the parents (in g…" (`ytc_UgzmorZ7Q…`)
- "HELP I JUST BROKE RHE FILTER HAHANSHUQNU9HXN "so you wanna get smashed" IS WHAT …" (`ytc_Ugz9ISsxq…`)
Comment
I can't help but think that our current approaches are like that of a child playing with fire. Sure we grasp the **concept** of the potential dangers and benefits of AI. But we don't really understand the actual scope of what we're doing. Not even the smartest among our species. But seeing how the leaders of the AI revolution/evolution have essentially opened pandoras box in pursuit of maximized profits, and that they've done so with minimal oversight combined with maximum secrecy, is not just worrisome - It's downright cause for fear and panic.
It could be nothing. It could be everything. But we likely won't know until it is already too late.
youtube · AI Governance · 2024-01-14T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
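Each dimension in the table is coded from a closed vocabulary. A small validator can catch malformed model output before it reaches this view; this is a minimal sketch, and the allowed values below are only those visible in the sample responses on this page, not necessarily the full codebook:

```python
# Allowed values per dimension, inferred from the sample LLM output on
# this page; the real codebook may define additional categories.
VOCAB = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in VOCAB.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in vocabulary")
    return problems

# The coding shown in the table above passes validation.
print(validate({"id": "ytc_x", "responsibility": "developer",
                "reasoning": "consequentialist", "policy": "regulate",
                "emotion": "fear"}))  # []
```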
Raw LLM Response
```json
[
 {"id":"ytc_UgwvmFk1EmYmpazLmON4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugz8KCgIjhqNn8a7aRx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwzgrTiIF1rZ7k0WWB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwu0T1S01SdXZd8KX54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzyIYKWw1hn4OX2H0Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxkPUCtKghdyiyt4wh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwmhUG99AYOwsg13-J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugxiw9JRWGl6xyjFOCh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzYHqAUE-IUbqUyR554AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugy1kVQt65vEHQjP1rZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
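The raw response is a JSON array of coded records, one per comment, and the page's lookup-by-ID feature amounts to indexing that array by `id`. A minimal sketch in Python, using two records from the response above (the record shape matches the sample; the lookup helper itself is illustrative, not the page's actual implementation):

```python
import json

# Raw LLM response: a JSON array of coded records, as shown above
# (abbreviated here to two records from that output).
raw = """[
 {"id":"ytc_UgzYHqAUE-IUbqUyR554AaABAg","responsibility":"developer",
  "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugy1kVQt65vEHQjP1rZ4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]"""

records = json.loads(raw)

# Index by comment ID so any coded comment can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

code = by_id["ytc_UgzYHqAUE-IUbqUyR554AaABAg"]
print(code["policy"])  # regulate
```

Building the index once makes every subsequent lookup O(1), which matters when the same batch of coded comments is inspected repeatedly.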