Raw LLM Responses
Inspect the exact model output for any coded comment; entries are looked up by comment ID.
Random samples:

- "The creators and experts in the AI fields don't know how and why certain AI syst…" (`ytc_UgznARmpG…`)
- "But AI will remove some jobs and people will have different jobs... ok JOE, WHIC…" (`ytc_Ugwq5w_O9…`)
- "This is no surprise for me. ChatGPT is trained on Online Texts, and what groups …" (`ytc_UgyxoIoWo…`)
- "@OnigoroshiZero Lol, if someone created it, I, or my parents when I was a child,…" (`ytr_UgyVPTJlU…`)
- "My analogy for art is through food. AI is fast food, junk that can sustain you, …" (`ytc_UgwxVIFc7…`)
- "Wrong... They compose new songs... They write compisitions that would win any wr…" (`ytr_UgzBMXuzt…`)
- "I’ve always wanted an AI chatbot friend, but haven’t found one that’s actually g…" (`ytc_UgyPItwrd…`)
- "Any idea how fast AI can react/take action or when would we be aware there's a p…" (`ytr_UgxvIl8M_…`)
Comment
The one thing I just can't see being solved (ever) is the "unalignment risk" - as in, the risk that even if we manage to make an AI which is aligned such that literally everyone benefits from it being turned on, what is stopping a bad actor from unaligning it? If the answer is only having access to the code / weights then yikes.
As a recent example, Meta open-sourced Llama2 and that came with guardrails. It took people about 2 days to retrain the model a bit such that all the previous "safety" finetuning it has gotten was perfectly ignored and of course nobody wants to use a version of the model which refuses answering some requests when they can instead have an AI which answers every request.
youtube · AI Governance · 2023-09-25T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxJ4AsahyT5iEdUIhd4AaABAg", "responsibility": "investor", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwTIr-dX5_DqkoOQg14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzCCUBr6WRwM6LJdmB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzODLBfrUYnXIZPLu54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzrW1pP1D73B6jyIaJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwC20gm87M2ffer2wV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwk0AP72kC4OmRMjdV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzna7YIqceIOENLAtt4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyEbyctvQkJDOiRelF4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxfcHrH2qOt4T0jDix4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
```
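Since the raw response is a JSON array of per-comment records, looking up a coding by comment ID reduces to parsing the array and indexing it by the `id` field. A minimal sketch in Python (the `raw_response` string below is a two-entry excerpt of the array above; `index_by_id` is a hypothetical helper, not part of the pipeline):

```python
import json

# Two-entry excerpt of the raw LLM response shown above (same schema).
raw_response = """[
  {"id": "ytc_UgzODLBfrUYnXIZPLu54AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzna7YIqceIOENLAtt4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]"""

def index_by_id(response_text):
    """Parse the model output and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)
coding = codings["ytc_UgzODLBfrUYnXIZPLu54AaABAg"]
print(coding["policy"])  # prints "liability", matching the Coding Result table
```

The dict keyed by `id` makes the "look up by comment ID" step a constant-time operation, which matters when a batch response covers many comments.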