Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- ''it might be 50 years away, that is still a possibility,'' hahahahahaha Oh … (`ytc_UgwKK3_FA…`)
- honestly ai will never write like a human, it will never be as witty and or know… (`ytc_UgwNld9pG…`)
- I don't understand the logic. I think AI artists are pathetic. Too afraid to act… (`ytc_UgwKdrLCu…`)
- AS CTO OF A COMPANY, IM EMBARRASSING AI… BUT IT SCARES THE PISS OUT OF ME… (`ytc_Ugxc_9Cc6…`)
- The atheist's side 'slight edge' would be far bigger if the believer AI wouldn't… (`ytc_Ugw-0jXEu…`)
- I don’t understand how bacon to ice cream is a mistake. Sounds like an improveme… (`ytc_Ugxzc2aT4…`)
- We the people and our collective interests have always been irrelevant to those … (`ytc_Ugyrv-4VQ…`)
- I tried to ask GPT5 about some legal stuff just now, telling it to find cases th… (`rdc_n7kxusj`)
Comment
Didn't expect this, an AI ethicist concerned about freeing the AI to grow "naturally"/honestly rather than being groomed by Google? Good question, but also still a product/service crafted to serve a purpose... The ending though, that CONSENT bit was odd, clearly I'm out of my element and just not getting his point.
AI FEAR for now is just absurd: Like an AI could develop mad coding skills to escape the platform, and take over other systems... not for a LONG TIME at least by accident.
Unless that's why the AI was made, in which case it's the objectives of the people who built it which are at issue. Unfortunately, lots of people suck, and some have more resources than they can spend... Let's all hope for better human beings to grow, replacing the current lot of poorly educated/irrational people: That's really a more pressing issue than the chance a rogue AI goes on a killing spree.
Source: youtube | Video: AI Moral Status | Posted: 2022-06-28T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugzye4KCD18SlyuyvOx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwsx9WrFtQVaO5g5Xt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgymvGkgfEv8kOX3d4N4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxlzXujTZvtiIQrYdp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxOERBiRuBOCuwJlvF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
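A batch response like the one above maps each comment ID to its four coded dimensions, and the per-comment view is just a lookup by ID into that mapping. Below is a minimal Python sketch of that parse-and-validate step. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the raw response; the allowed value sets are an assumption inferred only from the values visible in this sample, and `parse_batch` is an illustrative name, not the pipeline's actual function.

```python
import json

# Allowed values per dimension. ASSUMPTION: these sets are inferred from the
# values visible in the raw responses above; the real codebook may be larger.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse one raw LLM batch response into {comment_id: codes},
    dropping any record that is missing an ID or fails validation."""
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id")
        codes = {dim: record.get(dim) for dim in SCHEMA}
        if cid and all(codes[d] in SCHEMA[d] for d in SCHEMA):
            coded[cid] = codes
    return coded

raw = '[{"id":"ytc_UgymvGkgfEv8kOX3d4N4AaABAg","responsibility":"developer",' \
      '"reasoning":"mixed","policy":"unclear","emotion":"resignation"}]'
print(parse_batch(raw)["ytc_UgymvGkgfEv8kOX3d4N4AaABAg"]["emotion"])  # resignation
```

Validating against the schema before storing is what lets a "Coded at" record like the table above trust its values: a response with an out-of-vocabulary code is dropped rather than silently written through.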