Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Given that Microsoft uses a lot of AI.... and the poor quality they are deliveri…" (`ytc_Ugy_ExVOU…`)
- "ai is not out of control it doesn't even exist, no software does anything not to…" (`ytc_UgxIMHL3M…`)
- "The thing with ai „art“ is that it’s not possible without artists… it could have…" (`ytc_UgwoWqwT9…`)
- "Welp people with No Skill can only complain. Just Like AI Artist or the 08/50 K…" (`ytc_UgwcF88jM…`)
- "Well the scenario goes from the worst (from human side) and the best (from AI si…" (`ytc_UgxgHR9aI…`)
- "The question is, by what standard can you even call something “evil” without God…" (`ytc_Ugwa18SwY…`)
- "Well my idea is that a.i will be our new space explorer instead of sending human…" (`ytc_Ugwgc6Xch…`)
- "This couple cannot blame AI because they weren’t perfect parents. You should mou…" (`ytc_UgxeCoyQp…`)
Comment

> yes...Artificial Intelligence that thinks at a level of a human , but can calculate and solve problems a million times faster than a human, is certainly is dangerous, and has the potential to outrun the gauardrails we humans put in place to stop them from taking over... so we humans need to prepare for the inevitable... that the A.I. will soon be far smarter than even the smartest person on Earth, and a million times faster at making decisions , means it can exponentially outpace the human race in a bid to control us , instead of us controlling A.I.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Responsibility |
| Posted | 2025-06-04T11:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxFUx02CX1GA0smoON4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"hopeful"},
  {"id":"ytc_UgxXfR6zPzET88PhFkN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyF1HqjOzCEBNO8AVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzXmnaTi_IyMiky53x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxo4z0IKOHyLx78tQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyyLR_VKLuHyZbRjFN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwna-OJDmDQvDKDOhN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugz9d6RzEX8N0vOMf7R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzy2FkWBitokr6XdZZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxnwzmJQx6uPEi2WHl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
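The "look up by comment ID" step can be sketched as below. This is a minimal illustration, assuming the raw LLM response is always a valid JSON array of objects keyed by `id`, as in the example above; the function name and the shortened sample payload are hypothetical, not part of the tool.

```python
import json

# A raw LLM coding response, shaped like the payload shown above:
# a JSON array of per-comment coding objects keyed by comment ID.
raw_response = """
[
  {"id": "ytc_UgxXfR6zPzET88PhFkN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzy2FkWBitokr6XdZZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

# Values observed in the responses above; used as a sanity check.
ALLOWED_RESPONSIBILITY = {"company", "developer", "ai_itself", "none", "unclear"}


def index_codings(raw: str) -> dict:
    """Parse a raw response and index the coding objects by comment ID."""
    codings = json.loads(raw)
    return {c["id"]: c for c in codings}


by_id = index_codings(raw_response)

# Every coded value should come from the expected label set.
assert all(c["responsibility"] in ALLOWED_RESPONSIBILITY for c in by_id.values())

# Look up the coding for one comment by its ID.
coding = by_id["ytc_UgxXfR6zPzET88PhFkN4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → ai_itself fear
```

In practice a real response may be truncated or malformed, so a production lookup would wrap `json.loads` in error handling and validate each object's keys before indexing.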