Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
A lot of BullSH#@T! We don't have cientific knowlege to create a real life AI. M…
ytc_UgyP7mFwG…
Could we be any more naive? The end has officially begun with this AI shit. Holy…
ytc_Ugxw24pcq…
Here's the truth
1: we're not obligated to support real artists
2:AI is a tool …
ytc_UgxDBlgku…
A question for conversation: If AI replaces the workforce, how will companies ma…
ytc_UgwsPT95u…
This whole argument is stupid. AI rips pieces of artwork apart and fuses them to…
ytc_Ugy7LBxy8…
World is rin by Narcissistic Psychopaths - AI will allow these creatures to rem…
ytc_UgxPB-L14…
I tell people all the time. AI can be use for good or bad. But it is an avenue d…
ytc_UgwKvC2qp…
The woman robot was trying to keep the male robot shut up and acting normal. Th…
ytc_UgxwVpCCZ…
Comment
okay but are you sure these LLMs are actually able to THINK ? intelligence, or wisdom, is attributed to humans with a soul and conscience given by God. LLM is just an algorithm using an enormous amount of data. Text is generated using probability. It's not possible it can THINK. Proof : lack of creativity, unable to actually solve simple problems like the Hanoi Towers. Force it to invent some new physics LAW. It will end up with really funny stuff. So I am not sure "AI" can take over humanity. The other problem is the enormous amount of resources it already eats up. VERY expensive, little refund. Might burst as a bubble because money or resources are over.
youtube
AI Governance
2025-09-24T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxx4dkP5nBhu4hjwml4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzobAlQ9uwOOkzb6il4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyH7UkXd3lnzjVjRQ54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz6_CG0xCIviT2X8-l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzoqMN0e1ieBbH9SSN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugysed2MsSoZ1qeE-d14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwCqtdoo1LYXz6_8R94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxll1b3jtiliPwzFFd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxs0Tz5qCDx17iBCjF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwId7DjKhsn7mDVbst4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
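The raw response above is a JSON array, one object per coded comment, with four coding dimensions. A minimal sketch of how such a payload could be parsed and validated is shown below; the `ALLOWED` code sets are assumptions inferred only from the values visible in this sample and the coding-result table, not from the full codebook:

```python
import json

# Allowed codes per dimension -- hypothetical, inferred from the sample
# response and the "Coding Result" table above; the real codebook may
# define additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "company", "developer", "user"},
    "reasoning": {"deontological", "consequentialist", "mixed", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "approval", "outrage", "fear", "resignation"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse the raw JSON array and reject rows with unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Usage: validate a one-row payload shaped like the response above.
raw = ('[{"id":"ytc_Ugxx4dkP5nBhu4hjwml4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"none","emotion":"indifference"}]')
rows = parse_llm_response(raw)
```

A strict allow-list like this catches the common failure mode where the model invents a code outside the schema, so bad rows fail loudly instead of being stored silently.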