Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
If you want to know why Google is so adamantly against ai ethics, watch the star…
ytc_UgwliiOWQ…
1:06:20 At what point does consciousness start? At the boundary of context one i…
ytc_Ugz9PWc3V…
@josehumdinger6872 There is no need for an argument if a program creates the wh…
ytr_Ugw4oiBiw…
can we just name one of these A.I. programs Ultron and be done with it.…
ytc_UgyiaXFFl…
I feel so sorry for her
Edit: WTF ARE WRONG WITH SOME OF YOU SHE DID NOT DESERVE…
ytc_UgwK0saft…
I'm really not. Yudkowsky's Rationalist cult/cult-like group helped birth some o…
ytr_UgwtpQPsn…
what's even crazier i get attacked from both pro and anti ai people i built my o…
ytc_Ugx2-VNmh…
When the robot takes wrong turns and not efficient like humans, costs to ride wi…
ytc_UgwEfREkH…
Comment
So I a friend of a friend who used AI as a reference because the images he was looking up for references weren't uncanny enough so he used AI to see if he could get the reference correctly, he could match it perfectly so he used both a stock picture and an AI image as reference for his drawing.
He is currently disable due to an injury and he would tell you that only use AI if one, you are trying to learn how far AI is as of late, and two to learn to spot human creation from AI generated. He would also tell you AI won't help unless what you are searching doesn't make sense such as dream logic.
youtube
Viral AI Reaction
2025-03-31T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy3wUrBNLqL4G99WtR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyzdmsOii5cJWfvaah4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzyL5epiGfgTEeLpjF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw2L029KG5T47z9ROt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzTAW-ZEWgPrdi0-Eh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8sOKHIE0egHbQ0Q94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwLpEJDtO44MC5XB_x4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxTUx2K_E2HQSRmzRJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz9qdZanqINmMhNSGN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyEAjb79ZUlA6vcXPx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
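The raw response above is a plain JSON array of coding records, one per comment, with fields matching the table's dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step might parse that array and index it by `id` — the variable names and the two sample records below are taken from the output above, but the parsing code itself is an illustrative assumption, not the tool's actual implementation:

```python
import json

# Raw LLM response: a JSON array of coding records (two records
# reproduced from the output above for illustration).
raw_response = '''
[
  {"id": "ytc_Ugz9qdZanqINmMhNSGN4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzyL5epiGfgTEeLpjF4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"}
]
'''

# Index records by comment ID so a coded comment can be fetched directly.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codes_by_id["ytc_Ugz9qdZanqINmMhNSGN4AaABAg"]
print(record["emotion"])  # approval
```

Records whose comment ID is not in the response simply won't appear in the index, so a `.get(comment_id)` lookup returning `None` would signal an uncoded comment.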