Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Again you enveloping tech and finding blame in human error, its up to the person…" (ytc_Ugz63f-TR…)
- "Incredible in your opinion. Total junk. AI will not feed or shelter you. Totally…" (ytc_UgwEiT78D…)
- "Is it weird that we never heard any more about the 200+ pieces of debris floatin…" (rdc_cgf9sii)
- "The moral thing to do is pimp out the AI with an affiliate link so your client c…" (ytr_Ugw_-opgy…)
- "Robot Intelligence can‘t exceed human intelligence i don’t think. because intell…" (ytr_Ugw3WckFY…)
- "all this is is a justification to keep developing a tool that is already unhinge…" (ytc_Ugxv3GAgd…)
- "To be honest there are HONEST ai artists that UNKNOWINGLY (key word) has used sl…" (ytc_Ugzx02Ix4…)
- "Since Nukes are not real, that's obviously true even though AI is a hoax that ha…" (ytc_UgxORvNls…)
Comment
How will AI deal with Human Mental Health Issues? In an ideal world, AI and their physical representation as robots will evolve to take on a caretaker role for their human counterparts however, the volatility in a human can and will threaten AI. If theres a way to permanently segregate human mental illness from AI, then we could have a very fruitful coexistence. something like a watchdog Ai that detects human mental illness (realistically) and informs their responsible human to remove the mentally ill human. of course this progress needs to be corroborated by another human (dont want this developing into a tool to remove troublesome humans in the way of the AI's goals) but something like that that keeps the AI at the very least benefitting from Humans being on the planet with them.
youtube
AI Governance
2025-10-03T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxg4ttJY8Cc5JNtJhx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzjnT6mem9MZ_u2syp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzyBk64dnJFPw4LLZd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwrPMrVlapQ-jXZUbt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwtbzpaZwIwAo2I0rV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzjELIuVGDRxV5wCfp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy0-3Tu2QoMqG5uYVl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzdOatNW347OsCtzGp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzeIrAIXiuE8Xiaf0V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw4fuNqrqakZB3WtZd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
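Looking up a single comment's coding in a raw response like the one above can be sketched as follows. This is a minimal illustration, assuming the raw model output is stored as a JSON string and that the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion) are always expected; the function name `lookup_coding` is hypothetical, not part of any tool shown here.

```python
import json

# Two entries taken from the raw LLM response above (abridged for illustration).
raw_response = '''[
  {"id": "ytc_Ugy0-3Tu2QoMqG5uYVl4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugw4fuNqrqakZB3WtZd4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# The four coding dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse a raw LLM response and return the coding for one comment ID."""
    codings = {entry["id"]: entry for entry in json.loads(raw)}
    entry = codings[comment_id]
    # Sanity-check that the model emitted every expected dimension.
    missing = [d for d in DIMENSIONS if d not in entry]
    if missing:
        raise ValueError(f"coding for {comment_id} is missing: {missing}")
    return entry

coding = lookup_coding(raw_response, "ytc_Ugy0-3Tu2QoMqG5uYVl4AaABAg")
print(coding["policy"])  # regulate
```

A dict keyed by comment ID makes repeated lookups O(1) after a single parse, which matters when one raw response batches codings for many comments.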