Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples
- ytr_UgzZvf3NK…: @stevesmith7843 The more human-like AI becomes, the more important it is to show…
- ytc_Ugw4AxsPG…: But he's wrong. Current military AI is able to track human movement with near pe…
- ytc_Ugyd59r-Z…: What corporations currently call A.I. are actually Large Language Models (LLMs);…
- ytc_UgzEot48h…: Wow. Calling the kettle black. Mainstream media has been agenda-driven for decad…
- ytc_UgxIpIkmt…: Companies want to use AI to make more money, but the thing is, money is a human …
- ytr_UgwLqfS3O…: pls don't use ai for thumbnails, even doing a funny sketch in ms paint would loo…
- ytc_UgzOJvG-q…: After one lifetime in denial, three days ago i became SURE that one (i won't tel…
- ytc_Ugy3jcmle…: Just understanding a large language model is the beginning of a consciousness th…
Comment
Listen and understand what I'm saying:
AI can provide scans, reports, and protocols, but understanding a patient's context, emotions, comorbidities, and lifestyle is not a machine's job.
Patients and courts will never trust a machine's decision 100%. Behind every AI system, a licensed doctor's approval will remain legally mandatory.
When the calculator arrived, people said "the need for mathematicians will end." When autopilot arrived, people said "the need for pilots will end." But what happened? 😂😂😂😂 Even now, pilots and mathematicians have become more powerful, with AI tools.
Source: youtube · AI Jobs · 2025-08-22T20:0… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzRRl0GxDNGJww5RqB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz-Mt71qjARW8fpoNl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy5WByzl6rwng3nyKB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7KdxpbK9GMQZKYLt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzsJrrZbcR5sjLxwNx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx3m5XSTfHbqw650rp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgywNQqgJ5GRTgvE3dt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwBf_2wiXuE-k9hY1p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwb9T7WVRGkknGKLOl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDm4-UKNOkd87RibR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
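The coded dimensions in the table above come from objects like these in the raw JSON array, keyed by comment ID. A minimal sketch of that parsing step follows; the helper name `parse_codings` and the two-row sample response are illustrative assumptions, while the field names mirror the response shown.

```python
import json

# Abbreviated stand-in for a raw LLM response like the one above
# (two rows only; real responses carry one object per coded comment).
RAW_RESPONSE = """
[
  {"id": "ytc_UgzRRl0GxDNGJww5RqB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgywNQqgJ5GRTgvE3dt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
"""

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Map comment ID -> {dimension: value} from a raw JSON response."""
    rows = json.loads(raw)
    return {row["id"]: {d: row[d] for d in DIMENSIONS} for row in rows}

codings = parse_codings(RAW_RESPONSE)
print(codings["ytc_UgywNQqgJ5GRTgvE3dt4AaABAg"]["policy"])  # regulate
```

Looking up a coded comment then reduces to a dictionary access on its ID, which is the flow the "look up by comment ID" view exposes.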