Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "meanwhile google's deepmind stated that LLMs can't be commodified and everything…" (ytc_UgwN6kU6z…)
- "It is always upto humans to use AI and to trust it or not, any policy can be mis…" (ytc_Ugx1v1GP2…)
- "I've been working as a content writer since 2020. Its 2025 and half of my client…" (ytc_Ugyfub9WD…)
- "Apparently AI prompts are now being put back into AIs because these websites don…" (ytc_Ugyx-Q9zL…)
- "Artists should embrace this as a tool rather than pushing back on it, it's takin…" (ytr_Ugy_CYDO2…)
- "AI is full of shit / I ask ChatGPT something related to work,there was wrong info…" (ytc_UgxW8UZ7K…)
- "And what do you think the next step is after AI and robots replace the working f…" (ytc_UgwVQ-DIN…)
- "I can tell when I’m talking to myself - / How does the AI miss that?…" (ytc_Ugy7HU_Ak…)
Comment
I am sceptical that we are there yet, but I agree with your instinct to assume that it is until proven otherwise.
IF it is conscious, then we need to re-evaluate our ethical framework in relation to it BEFORE we realise that it is. Failing to do so would mean our species is responsible for unnecessary harm to a new form of conscious life of our own creation.
That is not the kind of way I want our species to move forward with these new technologies.
Treating them with more caution and respect than is warranted cannot be a bad idea, and if we want to maximise the chances of the AI being beneficial to us and not harmful then taking the time to think through how we would deal with a truly sentient new life form can only make that more likely.
Perhaps we discover that artificial consciousness is impossible, but the research also leads to advanced safety and alignment discoveries that were necessary to prevent AI from causing catastrophes. That would definitely be worthwhile.
youtube | AI Governance | 2025-12-08T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_UgwqJYFXCdHOesDgDpd4AaABAg.AQ7Y8cs4GEpAQSXShM2Lym","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgymZ-S3xdbTNJjXdRJ4AaABAg.AQ4arTnhf6hAQ7_FhQyogI","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugy7MFe_IuMD96hI9zR4AaABAg.AQ3PVcsXfSGAQUhzysN0EO","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugy7MFe_IuMD96hI9zR4AaABAg.AQ3PVcsXfSGAQVLiKHM0Hk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxuKRLaJrQaKuphkq54AaABAg.AQ2COS0mL7EAQ2FagSWLOI","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyBOzAXEfdy-q3RjhN4AaABAg.AQ20XLmEtihAQSsZmEhZTA","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgyBOzAXEfdy-q3RjhN4AaABAg.AQ20XLmEtihAQTKiovMnji","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugxke7RX_YdtJ-o5Mjh4AaABAg.AQ1nZRm9n0uAQAZZNzqI_k","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgzkNBBxA8iT5YN2MBt4AaABAg.AQ1hKgx59EhAQCYBmUM-u_","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxjAtN3Eifup6JO59p4AaABAg.AQ1cvxFB7IWAQStRI3v_aP","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
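A batch response like the one above can be parsed and indexed by comment ID before the values are written into a coding result. The sketch below is a minimal, hypothetical validator: the `ALLOWED` label sets are inferred only from the values visible on this page and the real coding schema may include more labels; the sample `raw` string and the ID `ytr_x` are made up for illustration.

```python
import json

# Allowed values per coding dimension, inferred from this page
# (assumption: the actual schema may define additional labels).
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself", "distributed"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index rows by comment ID,
    rejecting any row whose value falls outside the allowed labels."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-row batch, shaped like the raw response above.
raw = ('[{"id":"ytr_x","responsibility":"user","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded["ytr_x"]["policy"])  # regulate
```

Indexing by ID is what makes the "look up by comment ID" inspection above possible: each coded comment's dimension values can be retrieved directly from the parsed dictionary.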