Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "same bro, I had been using AI since a long time even before chat gpt came out I …" (ytr_Ugxg1-ZFR…)
- "I wonder what the family thinks of the price paid for testing this self-driving …" (ytc_UgzPVQC7Z…)
- "As someone who's used ai to make porn I can say it likes white people more than …" (ytc_UgxY3qZcF…)
- "Thank you very much for educating and sharing this sad story. 🙏 Nobody is immun…" (ytc_Ugzr3Q3a4…)
- "Nexus is powerfyl. I use AI very happy with it. It has become my professor. Ai …" (ytc_UgyoB85gY…)
- "Thats why the ai will be used as a tool and not the first option, humans will st…" (ytr_UgwTdP6re…)
- "This is part of what I hate about AI. I don't want to talk to it like a person, …" (ytc_UgzjNF03J…)
- "And everything went really well, seemingly intelligent older guy, with lots of e…" (ytc_UgzB14j7P…)
Comment
In the end, it all comes down to the fundamental problem of misuse of technology. The presupposition is that super-intelligent AI will exist. What do we mean by that? A form of interactable intelligence that performs better than humans on most, if not all, predefined human benchmarks. What is not stated here is what system-designed inputs and outputs it will have, and whether it will be completely automated in that setting. And that is the core problem: even today's models can be pipelined into systems for performing tasks at any danger level, but nobody is doing it because of that risk factor and the potential damages. If the risks do not meet the required levels of reliability, the most these systems can be used for is recommendation to a human agent, and even then the use of such systems is questionable from every perspective. But it is possible.
AI itself is not a threat, but the setting of its use might be, and this can and should be discussed today.
youtube
AI Governance
2023-07-07T16:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_Ugxs32GfFAuVJqXtsER4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyP32EFA3Y5ktq3NCR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugxpm-nkEA4Jlj1DWUZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugx3O-lecstqLqiaL5N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugx5LT0M-B6vvyirP9Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgywSt7QVnzDLLJwsnZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzR72iHwgV5RJqN_6F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugx1ltpClDN2cUZQHmJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwB-pGM8x1G4L7K-sB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugymf1lykKqLfaW0dVN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}]
```
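A raw response like the one above can be turned into per-comment coding results (the four-dimension table shown earlier) with a small parsing step. The sketch below is illustrative, not the tool's actual code: the JSON schema (an `id` plus the `responsibility`, `reasoning`, `policy`, and `emotion` fields) comes from the output shown here, while the function name and the fallback to `"unclear"` for missing fields are assumptions.

```python
import json

# The four coding dimensions used in the results table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records)
    into {comment_id: {dimension: value}} for lookup by comment ID."""
    records = json.loads(raw_response)
    coded = {}
    for rec in records:
        # Fall back to "unclear" if the model omitted a dimension
        # (an assumption; the tool may handle this differently).
        coded[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

# Minimal example using the first record from the response above.
raw = ('[{"id":"ytc_Ugxs32GfFAuVJqXtsER4AaABAg",'
       '"responsibility":"unclear","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"indifference"}]')

codes = parse_codes(raw)
print(codes["ytc_Ugxs32GfFAuVJqXtsER4AaABAg"]["emotion"])  # indifference
```

Indexing by comment ID is what makes the "look up by comment ID" view above cheap: one dictionary lookup per inspected comment.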