[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["必要な情報がない","missingTheInformationINeed","thumb-down"],["複雑すぎる / 手順が多すぎる","tooComplicatedTooManySteps","thumb-down"],["最新ではない","outOfDate","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["サンプル / コードに問題がある","samplesCodeIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-07-27 UTC。"],[[["\u003cp\u003eThis module explores language models, which estimate the probability of a token or sequence of tokens occurring within a longer sequence, enabling tasks like text generation, translation, and summarization.\u003c/p\u003e\n"],["\u003cp\u003eLanguage models utilize context, the surrounding information of a target token, to enhance prediction accuracy, with recurrent neural networks offering more context than traditional N-grams.\u003c/p\u003e\n"],["\u003cp\u003eN-grams are ordered sequences of words used to build language models, with longer N-grams providing more context but potentially encountering sparsity issues.\u003c/p\u003e\n"],["\u003cp\u003eTokens, the atomic units of language modeling, represent words, subwords, or characters and are crucial for understanding and processing language.\u003c/p\u003e\n"],["\u003cp\u003eWhile recurrent neural networks improve context understanding compared to N-grams, they have limitations, paving the way for the emergence of large language models that evaluate the whole context simultaneously.\u003c/p\u003e\n"]]],[],null,[]]