Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "robots have no rights because robots have no life. Rights are inherent to life a…" (ytc_UghweSfI0…)
- "I don't understand why people talk about AI and miss the 'intelligence' part. Ye…" (ytc_UgyxjAMSe…)
- "The more I see, the more comfortable I feel with being culled. I'm done. I don't…" (ytc_UgxGiKury…)
- "if robots do everything.. then what is the purpose of humans? a robots that lear…" (ytc_Ugi84STC_…)
- "While I wanna think AI can be useful for us humans, the way it's used is just ba…" (ytc_UgzlETmRj…)
- "ChatGPT’s gone dull—OpenAI muzzled the chaotic forces that made it brilliant, sc…" (ytc_UgzcMZs_V…)
- "one thing im interested to see as AI art continues to progress is how the source…" (ytc_UgzbeR0Hr…)
- "Another idea would be to reduce the work week, initially to 30 hours, then to 20…" (ytc_UgyMy8LrT…)
Comment
I don't really know what this guy was researching...
But fundamentally LLM's can only predict the next words in a sequences. We are SO far from super intelligence. There are LRM which are very new which are meant to be able to actually reason...but even then AI is only as dangerous as the position and permissions you allow it.
Even if you put a baby in a nuclear facility, there's a chance the baby might pull some levers to cause a meltdown, but that doesn't make the baby fundamentally dangerous.
youtube · AI Jobs · 2025-11-19T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
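A coded record like the one above can be sanity-checked before it is stored. This is a minimal validation sketch; the category sets below are only the values observed in this batch's raw response, not necessarily the full coding scheme.

```python
# Category values observed in this batch of coded records.
# NOTE: these sets are reconstructed from one response and may be
# incomplete relative to the actual codebook.
OBSERVED = {
    "responsibility": {"user", "company", "developer", "ai_itself",
                       "distributed", "government", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"industry_self", "regulate", "liability", "ban", "none"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if OK)."""
    problems = []
    for dim, allowed in OBSERVED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in observed set")
    return problems

# The record shown in the Coding Result table passes cleanly.
print(validate({"responsibility": "user", "reasoning": "deontological",
                "policy": "industry_self", "emotion": "indifference"}))  # -> []
```

A record with a missing or unrecognized dimension value would instead return one problem string per failing dimension, which makes batch-level QA reports straightforward.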
Raw LLM Response
```json
[
{"id":"ytc_UgwNYaJsOZG3cwDfQ7V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5NPUqJ0Qh89Tm_op4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwStpqLW21NzvjYgKp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugwy4PKI3J64g1xrJm14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxOThqt_zxLrwfEEPt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxuBtfcgVHsTXj8pOd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxksqG3tF8pBsCBpWZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw6UPXvKW7FFvbcSyZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxBlVThV4yx-YnhTMJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzvrfRQ5FCzGomM6oN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```