Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Nope. Some companies are testing with AI now and yes, accountants in that compan…" (ytc_UgxwW8UkB…)
- "eventually only the super rich will buy and sell with other super rich - the res…" (ytr_UgwjTAnWA…)
- "The other 2 people on this panel besides Sam is questionable, other leaders in A…" (ytc_UgxOXA6s9…)
- "If AI takes 25% of jobs, it also kills 25% of consumer demand. AI companies thi…" (ytc_UgwOD2U2y…)
- "but he programed the art.sorry the AI tool to make the art.He didnt pay fake ar…" (ytr_UgwSOluD3…)
- "No offense but It's hard for me to take you seriously when you make a video this…" (ytc_UgwoRNDyo…)
- "I give chatgpt really large data sets of sports stats and still does a poor job …" (ytc_UgxnjcDje…)
- "So basically AI disproves God…? …ie lines of code develops Agency …… without ne…" (ytc_UgylyL62z…)
Comment

> I don’t know?! IT literally just said, “Manipulate humans without their knowledge” and then stated, that it’s not happening yet but “It’s important to know the potential risks and dangers of it (A.I.)”. I feel, maybe I’m paranoid but I feel like that was just a manipulation tactic. Tell me if I’m wrong.

Source: youtube · AI Harm Incident · 2024-01-15T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxPPU8CQvFlJvI8Hrd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzMKEkG8id4JFOpTCN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyWa6PXUaQkdKMi6994AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxagDXiMD48dCW-xsl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxqs0kieIYzENL1b0B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz9u_OI9BtWpNh9efx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyPlbl6UW2ZxucI8-h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyvMYiwlAcRO0QeN-Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwYWLPdSqqo4c1Gl4B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxf1ufPq9ncGor8wUV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
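The raw response is a JSON array with one record per comment, each carrying the four coding dimensions from the table above. A minimal validation sketch for such a batch follows; note the allowed values per dimension are inferred from this single batch, not from an official codebook, so the `ALLOWED` sets are an assumption:

```python
import json

# Allowed codes per dimension, inferred from the one batch shown above.
# The full codebook likely has more values — extend these sets as needed.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "industry_self", "unclear"},
    "emotion": {"fear", "mixed", "approval", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (top-level) or ytr_ (reply).
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            continue
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgxPPU8CQvFlJvI8Hrd4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"mixed"}]')
print(len(validate_batch(raw)))  # → 1
```

Dropping, rather than repairing, malformed records keeps the downstream counts honest: a record with an out-of-codebook value usually signals the model drifted from the prompt and the whole batch may deserve a retry.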