Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.

Random samples
- "Ai isn't awake, alive, sentient, or conscious, and it never will be. It can be …" (ytc_UgxEXOx0M…)
- "Im on the artists side after watching this video. I think the wrong thing is tha…" (ytc_UgyrZ6ANb…)
- "Do not confuse what you assume to be generative a.i. with what is actually imita…" (ytc_Ugx35aEuM…)
- "It is? I js went to an ai art site to see and yeah it does lol…" (ytr_Ugzuq59zS…)
- "AI is just a tool. What you do with that tool is up to you. It's similar to weap…" (ytc_UgwZN2BKe…)
- "Yeah ok 2 hours of ai gets you 2 years ahead. Stfu you could graduate high schoo…" (ytc_UgxnRcg17…)
- "public schools should be no more than an oasis for making friends and learning t…" (ytc_UgzF3B0nu…)
- "Very good topic. I actually have a good understanding of AI and certain types of…" (ytc_UgyKboeUa…)
Comment
I see two problems with using AI for therapy.
1. While AI can give useful responses, it can also be stunningly wrong. There is no way to tell if its response to a therapy situation is helpful, irrelevant, or harmful unless that response is reviewed by a qualified therapist, in which case you would be better off talking directly to the therapist and not wasting time with the AI.
2. AI has 0 privacy controls. Anything you would not be willing to tell to random strangers or print out and leave in a Starbucks for anyone to find should not be put into AI. I work with PII, and my employer's official policy on AI is that any use of AI for work must be approved by a company executive prior to putting anything into AI, and a significant portion of that approval process is to guarantee that no PII is entered into AI.
Source: youtube · AI Moral Status · 2025-06-28T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy_0Yiq6puBrmX4coB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxh-Bm_YymwWxTI3m54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwqh0rrHhR-qUOSKb14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwtuoLDp_XvJwS16Dx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwheWs0t7QAzBoyBGd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzSC9aKF72CP5bkUXJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugzu3Q6gf85weFslm7t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxXTXb7TiPflpK5yw14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy5l8GTGo10PHq-KS94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy5QNFi_Jqybf2iU5V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
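The lookup-by-ID step described above amounts to parsing the model's JSON array and filtering on the `id` field. A minimal sketch (the IDs and field names are copied from the response above; the helper name and the two-row excerpt are illustrative, not part of the tool):

```python
import json

# Excerpt of a raw batch response in the format shown above:
# a JSON array of per-comment coding rows.
raw_response = """
[
  {"id": "ytc_Ugy5l8GTGo10PHq-KS94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzSC9aKF72CP5bkUXJ4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"}
]
"""

def lookup_by_id(raw, comment_id):
    """Parse the JSON array and return the coding row for one comment ID,
    or None if the model's response does not contain that ID."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_by_id(raw_response, "ytc_Ugy5l8GTGo10PHq-KS94AaABAg")
print(coding["policy"], coding["emotion"])  # → regulate fear
```

Returning `None` for a missing ID makes it easy to spot comments the model silently dropped from a batch, which is worth checking before trusting the coded dimensions.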