Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I'm waiting for an ai to call me a monkey cuz that would be fuckin hilarious…" (ytc_Ugy6DKSMr…)
- "Skynet in the movie, "Terminator" explains it all. Little Note: 🤫🤔 China is alre…" (ytc_UgyRqlF34…)
- "Believe to have heard or read that the EU is in the process of legislating again…" (ytc_UgyrR8MuR…)
- "Tracing AI and stealing it back is a great idea! Just admit to it, dont hide it!…" (ytc_UgxFNTCuQ…)
- "No. It's not fair use. Open AI at its peak was worth 730 BILLION. Cough it u…" (ytc_UgwMYEJ5i…)
- "Guaranteed that data center is for AI so it would do nothing of what you claim. …" (rdc_oi4693k)
- "An Accomplished African American Grandmother Confronts Artificial Intelligence E…" (ytc_UgwTGKLac…)
- "I'm going to tell you why the predictive algorithm known as ChatGPT is never goi…" (ytc_Ugy2_b2Iw…)
Comment

> I'm in my last year of studying to become a social worker and I recently used AI to give me some ideas to write about a method I used in my internship and how I applied that to the field I worked in. On first glance it looked pretty smart, but when I really read it, with the intent of understanding what it had actually written, it was utter garbage. AI used key words associated with the method and the field I was working in, but it made connections that made absolutely no sense. I've also heard of people using AI as a friend/therapist which worries me a lot, especially since I saw a report of a teenage boy who fell in love with one of those character AI's and he basically told the AI that he wanted to kill himself, but because he didn't use words that literally say "I want to kill myself", the AI encouraged him and he did end up taking his life. There is a deeper meaning and alternate meaning to words and a complexity to emotions and environmental factors, that I don't think AI can ever learn. AI gives very generalised solutions. I don't think AI will ever take over my job or similar jobs like mine, because AI can't replace genuine human connection. Imagine a children's home run entirely by AI, or a psychiatric institution run by AI, no human personnel. Just robots. Do you really think that would work? Well it could work but it wouldn't help people. They'd probably just be sedated the whole time. There is also trust, scepticism and faith that is needed when working with people, that can't just be replicated with algorithms.

Source: youtube · Category: AI Governance · Posted: 2025-06-25T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgyXP-0lXv4Q79kGlut4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZU_X54wEkVbSycVN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxbmRGBNntkibQKokV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwFCCpZdarjZ6-RpIR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyo-K2iTKFIkJYFd1J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzvpvbr8Ng-HIoFBw54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw1yOheP4_jzMeiul54AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz1T3ROLjiq7Y3Ym6B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzm-FlhzIdsLwq6X_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzbpRT3ePZiTCQ6xOt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
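Before a raw LLM response like the one above is stored as a coding result, each record should be checked against the four-dimension codebook. The sketch below is a minimal, hypothetical validator: the allowed values are inferred only from the records shown on this page, so the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above. The full codebook used by the pipeline may include more categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array) and check every record.

    Raises ValueError if a record lacks an "id" or uses a value outside
    the schema; returns the parsed records otherwise.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# Example with a single (hypothetical) record:
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(len(validate_records(raw)))  # 1
```

A record that comes back with an out-of-schema value (e.g. a hallucinated emotion label) fails fast instead of silently polluting the coded dataset.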