Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Hey please tell me that can I learn this artificial intelligence course just aft…
ytc_UgxIg2sqF…
The thing that throws me off (for an llm) is the changing of tones and inflectio…
ytc_Ugyvhs3pR…
AI only has specific uses and in some cases what I call bullshit jobs. Where it …
ytc_UgwCw3vZk…
It depends on the limits we are able to put in place and whether the AI could by…
ytc_UgzXs3hG9…
I always start a conversation with chatGPT being polite, but after a dozen of it…
ytc_UgwdrIkpP…
AI job replacements are not fully implemented yet to reflect such layoffs, espec…
ytc_Ugx2FtH_l…
_"A broken clock is right twice a day."_ Sure A.I. can help even in its current…
ytc_UgwbRcobR…
Yes, she seems to be a great stateswoman and we need her, especially to oppose t…
rdc_ni1456g
Comment
@1:20:06 - “The public interest of these technologies is at the core. And if it is going to be using our land, our air, our energy, and our water, then we need to have a say in it. And these tools should be used for the public good. And that actually means making this something that serves all of us, not the few.” [Read also: EMPIRE OF AI, AI SNAKE OIL, and all of the books by forensic epidemiologist Harriet A. Washington ]
@1:20:35 - Callison-Burch: “There (are) Isaac Asimov’s Laws of Robotics, (as follows):
1. A robot must do no harm to an individual, including through inaction.
2. A robot must obey what the human says
3. The robot must not do harm to itself, barring laws 1 and 2.
And then later there was a Zeroth law, which is that robots cannot harm humanity.”
[Read also: JUSTICE FOR SOME, THE NEW JIM CROW, and THE COLOR OF LAW to see how laws have been and are being used to harm humanity.]
It’s 3.28.2026 and this presentation is obsolete.
youtube
AI Governance
2026-03-28T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzWhMQf-fWyYRC0RQN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwFNcr-iTYJNlYWxNV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7YGiuUMNoGVywD7V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgycwsUmKqm4OczGMxN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzXV3bNuWnww8U2-KN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwtvB9lp3PCyGgud7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx_Ev-M9lkiBtAXFyF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgylRrXtZbD14m8WgSt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwbpMwdbQuOvKCPFDR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxEBlpi5snRAYIlw6V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
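A batch response like the one above has to be parsed and checked before its labels are stored, since the model can emit malformed JSON or labels outside the codebook. Below is a minimal validation sketch. The allowed label sets are an assumption inferred only from the values visible in this log (the real codebook may define more); `validate_batch` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Allowed labels per coding dimension, inferred from the rows observed
# in this log (assumption: the actual codebook may contain more values).
CODEBOOK = {
    "responsibility": {"none", "distributed", "company", "government",
                       "developer", "ai_itself", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "industry_self", "liability", "none"},
    "emotion": {"indifference", "fear", "approval", "outrage",
                "unclear", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only rows that carry an
    id and an in-codebook label for every dimension."""
    rows = json.loads(raw)  # raises ValueError on malformed JSON
    valid = []
    for row in rows:
        has_id = isinstance(row.get("id"), str)
        labels_ok = all(row.get(dim) in labels
                        for dim, labels in CODEBOOK.items())
        if has_id and labels_ok:
            valid.append(row)
    return valid
```

Rows that fail validation would then be queued for re-coding rather than silently dropped, so every comment ID ends up with a usable label set.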