Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing random samples.
Comment
Another problem that goes hand in hand with powerful AI is the corporations behind it.
Their aim with these AI isn't to build a better world for humanity, it ultimately is for one simple thing: money. Specifically, whoever gets control of this can make vast fortunes for their company and themselves.
In an ideal world AI could be used to eradicate the need for people to work, to have universal health care for all, no more food shortages, there could be a universal income for humans to live off. Essentially utopia to a degree.
Giving people time to live their lives, expand their experience and knowledge.
Of course sadly, that will never happen.
youtube · Cross-Cultural · 2025-09-29T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw8nfYm5NkBpABw_-Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwY92c23w0ylAy-F7l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy5oVLtwc2tQamoGSt4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwcMWQe4QcOn15oz_p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxm5BVTMNBtnwq8vLt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyaP8MymxebWv43nOF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwoZvnqGVf1D4DZX6N4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7fCk4F31fu4-YOrl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGbxiYX1Em5AbvmTR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzC9FcXcEwW-aTY80p4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
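The raw response above is a JSON array with one coding object per comment. A minimal sketch of the lookup-by-ID step might look like the following; the sample records are copied from the response shown above, but the `index_codings` helper name is an assumption, not the tool's actual implementation:

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
# These two records are taken from the sample response above.
raw_response = """
[
 {"id": "ytc_Ugy5oVLtwc2tQamoGSt4AaABAg",
  "responsibility": "company", "reasoning": "mixed",
  "policy": "regulate", "emotion": "outrage"},
 {"id": "ytc_UgxGbxiYX1Em5AbvmTR4AaABAg",
  "responsibility": "distributed", "reasoning": "consequentialist",
  "policy": "liability", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw coding response and index each record by its comment ID.

    (Hypothetical helper: illustrates the lookup, not the tool's own code.)
    """
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
print(codings["ytc_Ugy5oVLtwc2tQamoGSt4AaABAg"]["policy"])  # regulate
```

Indexing by ID once and reusing the dictionary keeps each subsequent lookup constant-time, which matters when cross-referencing hundreds of coded comments against their raw model output.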