Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It's scary thinking about how most of us have been on social media posting pictu…" (ytc_UgyHngS5q…)
- "we have safeguard on making deadly virus or bacteria pathogen because humans are…" (ytc_UgwyB9C9m…)
- "You are aware that in the USA - Fusion Centers with their Joint Collaboration wi…" (ytc_Ugwh-9Et_…)
- "What the advent of AI should do is allow people to live their lives how they wan…" (ytc_Ugy9HZzXb…)
- "AI is a tool, only as good/moral as the person using it. I like the idea of wate…" (ytc_UgyGgcQF_…)
- "Such an insightful conversation as someone trying to understand the lay of the l…" (ytc_Ugx7xhOVB…)
- "One cannot ignore the clear intent and strategic prompt engineering involved in …" (ytc_UgzmuOoPH…)
- "Bittensor $TAO solves all issues of centralized Ai. Study it and thank me later …" (ytr_Ugw7NGojg…)
Comment
Btw I ultimately don't agree that tools should have more regulations than: explanation of risks and how to avoid them, obligatory before start of using it. In case of some interactions - transparency that: you are talking to bot/AI, that it can be wrong/halluciate and that its creator dont take responsiblity for its use other than it is meant for.
2. As Sam said - you can sue for harm, if they didnt warn you before they should be responsible, just like medical companies for addictions, banks for economical crisises and social media for related issues and profit over safety way of operations.
youtube · AI Governance · 2023-06-29T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugye5P668H0sFEzba1B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWj33fFsnXXkXy_nZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyxo1aWgTHd3EsFvwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxX9taOsHS6xjiYKGZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyAzK72rDK1CT1gTQx4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxMnJeU-xP1KgAC47F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4yZixOD852d0mnmR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyUoQOOeAOeKDyGxz54AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyubwfKjUUpA6D0VMt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxsdph8Gv1qu9Yz654AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
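The raw response above is a JSON array with one coding object per comment, so looking up a coding by comment ID amounts to parsing the batch and indexing on the `id` field. A minimal sketch of that lookup, assuming only the response shape shown above (the `raw_response` sample and function name here are illustrative, not the tool's actual code):

```python
import json

# Example batch response: a JSON array of coding objects, one per comment,
# mirroring the shape of the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgyAzK72rDK1CT1gTQx4AaABAg", "responsibility": "user",
   "reasoning": "contractualist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgyubwfKjUUpA6D0VMt4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a batch coding response and key each coding by its comment ID."""
    codings = json.loads(response_text)
    return {c["id"]: c for c in codings}

lookup = index_by_comment_id(raw_response)
coding = lookup["ytc_UgyAzK72rDK1CT1gTQx4AaABAg"]
print(coding["policy"])  # → liability
```

In practice the model output may carry extra text around the JSON array, so a production version would want to locate and validate the array (and handle missing IDs) before indexing.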