Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Who here has played with GPT3? If Lambda is roughly equal to GPT3, these language models/AIs are absolutely awesome. But clearly not sentient. Of course I can only comment on GPT3 per se. Essentially these AIs are so good at predicting the next word to say given a context that they generate entire paragraphs in context and thus can follow commands and answer questions and conduct dialog. Because the context statistically governs the next word it can conjure up not just a linguistically sensible word but a word that’s part of the next sentence and concept it’s neural net dreams up. It’s because the neural nets are SOOOO deep, they are predicting plots and inferences and dialog. All from only training on predicting the next word! I love GPT3. I think this guy is very likely overstating things but we’ll see. Back around 2017 I predicted human-like text AI by 2024. I stand by that prediction. It all depends on if people continue to try to. It looks like they are.
youtube AI Moral Status 2022-06-28T10:4… ♥ 1
Coding Result
Dimension      Value
Responsibility none
Reasoning      unclear
Policy         none
Emotion        approval
Coded at       2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugz7Socjqo1ySy4okBB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzTl8yiNOHhKE1WjRt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzRVqPbMIT8y61CYbB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwwy6zPqa1CJ8Fm1ah4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz6YbpnMXPkZLrL2LN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
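The raw response is a JSON array with one object per coded comment, keyed by a comment `id`. A minimal sketch of how the coding for a single comment might be looked up from such a batch response (field names are taken from the response above; the `lookup` helper is illustrative, not part of the pipeline):

```python
import json

# Abridged batch response in the same shape as the raw LLM output above.
raw = (
    '[{"id":"ytc_Ugz7Socjqo1ySy4okBB4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"mixed","policy":"unclear","emotion":"fear"},'
    '{"id":"ytc_UgzTl8yiNOHhKE1WjRt4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"approval"}]'
)

def lookup(raw_json: str, comment_id: str) -> dict:
    """Parse the batch response and return the coding dict for one comment id."""
    return next(c for c in json.loads(raw_json) if c["id"] == comment_id)

coding = lookup(raw, "ytc_UgzTl8yiNOHhKE1WjRt4AaABAg")
print(coding["emotion"])  # → approval
```

The entry matching the comment above (`emotion: approval`, `responsibility: none`) is the one shown in the coding-result table.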