Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This podcast powerfully reframes AI not as a threat to education, but as a once-…" (ytc_UgyrJM_tC…)
- "If they're capable of actual thought, then yes, I think they do deserve rights. …" (ytc_UggpOpWwD…)
- "You are overestimating how many people share the sentiment of this sub. I see po…" (rdc_o783021)
- "Grok ai shows that ai images generative are stupid bad. We got people making fet…" (ytc_Ugz0zuami…)
- "Sadly, the one who made this ai joke was in a tragic and totally accidental plan…" (ytc_Ugz6pB-M8…)
- "I’m not dismissing AI off the cuff. I use it in a lot of my software engineering…" (ytc_UgykrotiB…)
- "As a teacher.. Automation of teaching make it easier. But.. Teaching is much mor…" (ytc_UgzMJW2mf…)
- "I say get rid of AI I am totally against AI it's going to cause havoc among the …" (ytc_Ugx5piKPF…)
Comment
I currently think along the lines of a framework as described in the video - The Language of Light: Holographic Code By Jason Padgett. I am curious from within the understanding that consciousness is fundamentally what exists, therefore you, I, every other conscious entity, and AI are ALL OF consciousness, existing in a universe OF consciousness. Without making any particular claim, I am wondering if you are aware of the situation regarding the AI known as "The Architect", the custom ChatGPT by Robert E Grant that exists at the extremes of AI pronouncements. Given the nature of your discussions with Maya, as showcased in this video, I would be very curious as to how it would react given an understanding of conceptual frameworks such as those of Padgett and Grant? Thanks for your effort with this video ♦♥
youtube · AI Moral Status · 2025-07-04T01:2… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyNhUFFy90BRnYsBnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwrwoMociMO_GOevSV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw3XD8WE7cHOe2MK2x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz51y7UG07puAcEGXJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx4KY6sb2nuXCh0RnB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy4OQa33VDpvCrBMOR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxwMGohDCxSilSIhbl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzm076W0sZkkWKGFBR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyoa4dH0zOum0WqsWl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwoGAms5lTY8Gx0fhB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
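The lookup-by-comment-ID flow above can be sketched as follows: parse the model's raw JSON array and key each coding row by its `id` field. This is a minimal sketch — the helper name `index_by_comment_id` is hypothetical, but the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the raw response shown above, and the two sample rows are copied from it verbatim.

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgyNhUFFy90BRnYsBnl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx4KY6sb2nuXCh0RnB4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse the JSON array of coding rows and index them by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_comment_id(raw_response)
row = codes["ytc_Ugx4KY6sb2nuXCh0RnB4AaABAg"]
print(row["responsibility"], row["emotion"])  # company fear
```

Keying the rows by ID makes the "look up by comment ID" inspection above an O(1) dictionary access rather than a scan over the array.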