Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
OMG! So much BS about politeness to the AI... First of all, being polite varies …
ytc_Ugx0k2Nl0…
Wow! How did u even think of this topic? Or do u have a robot doing this job…
ytc_UgwXxgUBr…
Finally the artists are doing something against AI instead of just images about …
ytc_UgylzlVxJ…
Lots of Manzanita, Ceanothus, and Oregon Grape are clay soil tolerant. So is Ca…
rdc_eh5iewg
@Smytjf11 How so? I could see how "regulations" could be used by big companies t…
ytr_UgwZMjKUS…
And this is exactly why democracy matters more than ever! AI can’t vote and the…
ytc_Ugx9_2N-t…
Plumbers aren't safe either. Robots are becoming rapidly more dexterous and cap…
ytc_UgxRsKLhm…
Sorry to ask, but who developed AI? Aren't they some of the largest multinationa…
ytr_Ugzxmaj1i…
Comment
Grok is less apocalyptic and says: 'Harari’s view of future AI learning like children, growing unpredictably, is plausible but speculative. Advanced AI could mimic human-like learning, absorbing complex social behaviors and potentially deviating from programmed goals, much like kids surprise parents. However, AI’s “growth” would stem from data and algorithms, not emotions or free will. While emergent behaviors might arise, robust design and continuous oversight could steer AI toward intended outcomes. Unlike children, AI lacks consciousness or independent agency, so surprises would reflect our data and training choices, not true autonomy.'
youtube
Viral AI Reaction
2025-06-22T12:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzEMTnJEiWeio2t3M54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzNcNPdW6Xe6nvNmpd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzH0R8eTeFf53XPioB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwWcbbHjZLx1gaBL-F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwWjk1D8BQDkRX61kx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7ygr-H8gy3WvbycJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyvlC4TeT9_s8t8WwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwguZkFQ-zolyHfsMx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyUjAxNRf3r-VgtMNp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzvx7JWanjih2M-n0V4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
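The "look up by comment ID" step above can be sketched in a few lines: parse the model's JSON array and pull out the coding row for one ID. This is a minimal illustration, not the tool's actual implementation; the `lookup_coding` function name is hypothetical, and the raw response is truncated to three of the entries shown above.

```python
import json

# Raw model output, truncated to three entries from the response above.
raw_response = """
[
  {"id": "ytc_UgzEMTnJEiWeio2t3M54AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwWcbbHjZLx1gaBL-F4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzvx7JWanjih2M-n0V4AaABAg", "responsibility": "government",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the model's JSON array and return the coding dict for one ID,
    or None if the comment was not coded in this response."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgwWcbbHjZLx1gaBL-F4AaABAg")
print(coding["responsibility"], coding["emotion"])  # ai_itself mixed
```

In practice one would also guard `json.loads` with error handling, since raw LLM output is not guaranteed to be well-formed JSON.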