Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "If AI doesn't kill us all, I will write music, regardless of the impact of the A…" (`ytc_UgyTqcXLI…`)
- "We appreciate your feedback! While the themes of AI and wisdom can be found in v…" (`ytr_UgwpcjIij…`)
- "So real, I'm gonna be jobless and homeless because art was all I was good at but…" (`ytr_UgxDrNV32…`)
- "I think the problem for me is the more personal one. people pursue art because t…" (`ytc_UgwqbvRUE…`)
- "Yes the "they take our jobs" Argument and frar is super old. But in the past tim…" (`ytc_UgwSAGss2…`)
- "i think if ai gets smart fast enough we're not in danger at some point its so sm…" (`ytr_UgzHkWTR9…`)
- "The ai in the first one I thought was more obvious that it was ai because there …" (`ytc_Ugz7uhotL…`)
- "They'll just use other biometrics (gait, habit, scent), or use FR anyways while …" (`rdc_eu63sg5`)
Comment
While these models can seem eerily clever and creative, they are essentially just hypercharged autocomplete and lack true intelligence or understanding. However, if these systems become more efficient and cost-effective, they could be distributed at massive scales, leading to the creation of millions of emulates that emulate human intellect. This could have potentially dangerous implications, such as creating counterfeit digital assistants designed to gather personal data or flooding social media with spurious claims to poison public discourse. The future of AI will depend on whether these language models hit a speed bump or become more efficient, and whether society can find ways to regulate and control their use.
youtube · AI Moral Status · 2023-08-21T02:5… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
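The four coding dimensions in the table above can be modeled as a small record type. A minimal sketch in Python; the label vocabularies in the comments are only the values observed in this dump, not necessarily the full codebook:

```python
from dataclasses import dataclass

# Hypothetical record for one coded comment. Field names mirror the
# "Coding Result" table; example labels are those seen in this dump.
@dataclass(frozen=True)
class CodingResult:
    comment_id: str
    responsibility: str  # e.g. "none", "distributed", "developer", "company", "ai_itself"
    reasoning: str       # e.g. "unclear", "consequentialist", "deontological", "mixed"
    policy: str          # e.g. "none", "regulate", "industry_self", "ban", "unclear"
    emotion: str         # e.g. "approval", "fear", "mixed", "outrage", "unclear"

# The table above as a record. The full comment ID is an assumption,
# taken from the matching row of the raw LLM response.
example = CodingResult(
    comment_id="ytc_Ugy4g9JhMUtU1hOpO014AaABAg",
    responsibility="distributed",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
)
```

A frozen dataclass keeps coded records immutable and hashable, which makes them safe to deduplicate or use as dictionary keys downstream.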
Raw LLM Response
```json
[
{"id":"ytc_UgzJ3FLTyw6se1M-VXJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy4g9JhMUtU1hOpO014AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxzD5Ga7iRgWTzMgTN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx1GuESI1uqv6DHL554AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy980grYPdF95VZUFB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxAgMZpYk-v22DbTqp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx7VwP2dj16RojXhBx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzbDJ_lHDfvcS--4Kp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwjy_66TPPRzFpGsCZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzJ9xAXSYVCRXs2ZR94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
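The lookup-by-comment-ID flow this page offers can be sketched by parsing the raw model response and indexing it by `id`. A minimal sketch, abridged to two rows from the array above; the `lookup` helper is hypothetical, not part of the tool:

```python
import json

# Raw LLM response, abridged to two entries from the array above.
raw_response = '''
[
 {"id":"ytc_UgzJ3FLTyw6se1M-VXJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy4g9JhMUtU1hOpO014AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
'''

# Index every coded comment by its ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coding dict for a comment ID, or None if it was not coded."""
    return codings.get(comment_id)

result = lookup("ytc_Ugy4g9JhMUtU1hOpO014AaABAg")
# result["policy"] == "regulate", result["emotion"] == "fear"
```

Indexing once up front is what makes per-comment inspection cheap: each click is a dictionary lookup rather than a scan of the whole response array.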