{"id":3341,"date":"2025-09-24T20:59:48","date_gmt":"2025-09-24T12:59:48","guid":{"rendered":"https:\/\/moresourcing.com\/how-llms-work\/"},"modified":"2025-09-24T20:59:48","modified_gmt":"2025-09-24T12:59:48","slug":"how-llms-work","status":"publish","type":"post","link":"https:\/\/moresourcing.com\/es\/how-llms-work\/","title":{"rendered":"How LLMs Work: Top 10 Executive-Level Questions"},"content":{"rendered":"<p><\/p>\n<div>\n<div class=\"article-left-col\">\n<section class=\"article-topics\">\n<h4 class=\"article-topics__title\">Temas<\/h4>\n<ul class=\"article-topics__list\">\n<li class=\"article-topics__item\">\n                <a href=\"https:\/\/sloanreview.mit.edu\/topic\/data-ai-machine-learning\/\">Data, AI, &amp; Machine Learning<\/a>\n            <\/li>\n<li class=\"article-topics__item\">\n                <a href=\"https:\/\/sloanreview.mit.edu\/topic\/ai-machine-learning\/\">AI &amp; Machine Learning<\/a>\n            <\/li>\n<\/ul>\n<\/section>\n<section class=\"article-section\">\n<h4 class=\"article-section__title\">Columna<\/h4>\n<p>\n            Nuestros columnistas expertos ofrecen opiniones y an\u00e1lisis sobre temas importantes a los que se enfrentan las empresas y los directivos modernos.        
<\/p>\n<p>        <a href=\"https:\/\/sloanreview.mit.edu\/series\/column\/\" class=\"article-section__link\"><\/p>\n<p>           M\u00e1s de esta serie<br \/>\n                      <\/a><\/p>\n<\/section><\/div>\n<aside class=\"article-ad ad-300  ad-300x250 ad-desktop\">\n<\/aside>\n<aside class=\"article-ad ad-300  ad-300x250 ad-mobile\">\n<\/aside>\n<figure class=\"article-inline\">\n<img fetchpriority=\"high\" decoding=\"async\" width=\"1290\" height=\"860\" class=\"wp-image-122908\" srcset=\"https:\/\/moresourcing.com\/wp-content\/uploads\/2025\/09\/How-LLMs-Work-Top-10-Executive-Level-Questions.jpg 1290w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-300x200.jpg 300w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-150x100.jpg 150w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-768x512.jpg 768w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-764x509.jpg 764w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-382x255.jpg 382w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-870x580.jpg 870w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-435x290.jpg 435w\" data-lazy-sizes=\"(max-width: 1290px) 100vw, 1290px\" src=\"https:\/\/moresourcing.com\/wp-content\/uploads\/2025\/09\/How-LLMs-Work-Top-10-Executive-Level-Questions.jpg\"\/><img fetchpriority=\"high\" decoding=\"async\" width=\"1290\" height=\"860\" src=\"https:\/\/moresourcing.com\/wp-content\/uploads\/2025\/09\/How-LLMs-Work-Top-10-Executive-Level-Questions.jpg\" class=\"wp-image-122908\" srcset=\"https:\/\/moresourcing.com\/wp-content\/uploads\/2025\/09\/How-LLMs-Work-Top-10-Executive-Level-Questions.jpg 1290w, 
https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-300x200.jpg 300w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-150x100.jpg 150w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-768x512.jpg 768w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-764x509.jpg 764w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-382x255.jpg 382w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-870x580.jpg 870w, https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-1290x860-1-435x290.jpg 435w\" sizes=\"(max-width: 1290px) 100vw, 1290px\"\/><figcaption>\n<p class=\"attribution\">Carolyn Geason-Beissel\/MIT SMR | Getty Images<\/p>\n<\/figcaption><\/figure>\n<div class=\"article-summary\"><strong class=\"article-summary__strong\">Resumen: <\/strong><\/p>\n<p>Business leaders are now rethinking workflows, organizational design, and other disciplines as their companies embrace artificial intelligence and generative AI tools. To make good decisions about the use of AI, these leaders need to grasp key aspects of the capabilities and limitations of the large language models that underlie the technology. Here are 10 of the most frequently asked questions about LLMs, along with answers that clarify aspects of GenAI that are often poorly understood.<\/p>\n<\/div>\n<p><span class=\"smr-leadin\">In my work<\/span> at MIT Sloan School of Management, I have taught the basics of how large language models (LLMs) work to many executives during the past two years. <\/p>\n<p>Some people posit that business leaders neither want to nor need to know how LLMs and the generative AI tools that they power work \u2014 and are interested only in the <em>results<\/em> the tools can deliver. 
That is not my experience. Forward-thinking leaders care about results, of course, but they are also keenly aware that a clear and accurate mental model of how LLMs work is a necessary foundation for making sound business decisions regarding the use of AI technologies in the enterprise. <\/p>\n<p>In this column, I share questions on 10 often-misunderstood topics that I am often asked about, along with their answers. You don\u2019t need to read a book on each one of these topics, nor do you have to get into the technical weeds, but you do need to understand the essentials. Consider this list a useful reference for yourself and for your teams, colleagues, or customers the next time one of these questions comes up in a discussion with them. I have heard from my executive-level students at MIT that this knowledge is especially helpful as a reality check in conversations with technology partners.<\/p>\n<h3>10 Essential Questions and Answers on GenAI and LLMs<\/h3>\n<h4>1. I understand that LLMs generate output one piece of text at a time. How does the LLM \u201cdecide\u201d when to stop?<\/h4>\n<p>Put another way, when does the LLM decide to give the user the final answer to a question? The decision to stop generating is determined by a combination of what the LLM predicts and the rules set by the software system running it. It is not a choice made by the LLM alone. Let\u2019s examine in detail how this works.<\/p>\n<p>When an LLM answers a question, it produces text one small piece at a time. The technical name for a piece is <em>token<\/em>.<a id=\"reflink1\" class=\"reflink\" href=\"#ref1\">1<\/a> Tokens can be words or parts of words. 
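As a toy illustration of how a word splits into subword tokens, here is a greedy longest-match tokenizer; the vocabulary and matching rule below are simplified stand-ins, not any vendor's actual tokenizer:

```python
# Toy subword tokenizer: greedily take the longest vocabulary piece.
# Real LLMs use learned vocabularies (e.g., byte-pair encoding) with
# tens of thousands of entries; this tiny set is purely illustrative.
TOY_VOCAB = {"un", "break", "able", "unbreak", "token", "s", " "}

def toy_tokenize(text: str) -> list[str]:
    """Split text into the longest matching vocabulary pieces, left to right."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            piece = text[i:j]
            if piece in TOY_VOCAB:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

print(toy_tokenize("unbreakable"))  # -> ['unbreak', 'able']
```

OpenAI's Tokenizer tool, linked in the references, shows how real models tokenize arbitrary text.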
At each step, the LLM predicts which token should come next based on the prompt<em> and <\/em>what it has already written so far.<a id=\"reflink2\" class=\"reflink\" href=\"#ref2\">2<\/a><\/p>\n<p>An external system runs the LLM in a \u201cgenerate the next token; append it to the input; generate the next token\u201d loop until a <em>stopping condition<\/em> is triggered. When this happens, the system stops asking the LLM for more tokens and shows the result to the user.<\/p>\n<p>Many stopping conditions are used in practice. An important one involves a special \u201cend of sequence\u201d token that (informally) means \u201cend of answer.\u201d This token is used in the training process to denote the end of individual training examples and so, during training, the LLM learns to predict this special token at the point where its answer is complete. Other stopping conditions include (but are not limited to) a limit on the maximum number of tokens that have been generated so far, or the generation of a user-defined pattern called a <em>stop sequence<\/em>.<\/p>\n<p>When we use the web version of a tool like ChatGPT as consumers, we don\u2019t see this process \u2014 only the finished text. But when your organization starts building its own LLM apps, developers can adjust these stopping rules and other parameters themselves, and these choices can affect answer completeness, cost, and formatting.<\/p>\n<p>The important point here is that the \u201cdecision\u201d to stop is an interaction between the LLM\u2019s token predictions and external control logic, not a decision made by the LLM.<\/p>\n<h4>2. If the LLM makes a mistake and I correct it, will it update itself immediately?<\/h4>\n<p>No, the LLM will not update itself immediately if you correct it. 
If you are using tools like ChatGPT or Claude, your correction might help improve future versions of the model if your chat history is included in a future training run, but those updates happen over weeks or months, not instantly. <\/p>\n<p>Some apps, such as ChatGPT, have a memory feature that can update in real time to remember personal information like your name, preferences, or location. However, this memory is used for personalization and does not appear to be used for correcting the model\u2019s factual knowledge or reasoning errors.<\/p>\n<h4>3. If the LLM repeatedly generates one token at a time based on the current conversation, why have I seen it use information from a prior conversation (say, from a week ago) in the response?<\/h4>\n<p>LLMs generate responses one token at a time, based on the input they are given in that conversation. By default, they don\u2019t use past conversations. However, as noted in the response above, some LLM applications have a memory feature that lets them store information from earlier chats \u2014 such as your name, interests, preferences, ongoing projects, or frequently queried topics.<\/p>\n<p>When you start a new chat, relevant pieces of this stored memory may be <em>automatically<\/em> added to the prompt <em>behind the scenes<\/em>. This means that the model is not actually recalling past chats in real time; instead, it is being fed reminders of that information as part of the input. That\u2019s how it can appear to \u201cremember\u201d things from a week ago.<\/p>\n<p>The details of what is stored and when it is used vary by vendor, and the exact methods haven\u2019t been disclosed. It is possible that a technique like retrieval-augmented generation (RAG) is being used to decide which memory items to include in a new prompt. Many platforms allow users to view, edit, or turn off memory entirely. 
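A minimal sketch of this behind-the-scenes prompt assembly follows; the stored items and the selection logic are illustrative assumptions, since vendors have not disclosed their exact methods:

```python
# Sketch of a memory feature: facts saved from earlier chats are silently
# prepended to the new prompt. The model "remembers" only because these
# lines are part of its input, not because it recalls past conversations.
stored_memory = {
    "name": "Alex",
    "role": "CFO at a retail company",
    "preference": "answers in bullet points",
}

def build_prompt(user_message: str, memory: dict[str, str]) -> str:
    """Prepend stored memory items to the user's message."""
    memory_lines = [f"- {key}: {value}" for key, value in memory.items()]
    return (
        "Known facts about this user (from earlier chats):\n"
        + "\n".join(memory_lines)
        + "\n\nUser message: "
        + user_message
    )

prompt = build_prompt("Draft a budget summary.", stored_memory)
```

A real system might select only the memory items relevant to the new message (for example, via RAG-style retrieval) rather than including everything.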
In the ChatGPT app, for example, this can be accessed via Settings &gt; Personalization.<\/p>\n<p>RAG, if you are not familiar with it, is a technique that retrieves relevant passages from a specific set of proprietary data and includes them in the prompt, helping the LLM ground its responses in that data.<\/p>\n<h4>4. I understand that LLMs have a training cutoff date, and they don\u2019t \u201cknow\u201d about things that happened after that date. However, they <em>can<\/em> answer questions about events that happened <em>after<\/em> the cutoff date. How does this work?<\/h4>\n<p>When you ask a question about something that happened after an LLM\u2019s training cutoff date, the model itself doesn\u2019t \u201cknow\u201d about the event unless it has access to up-to-date information. Some systems \u2014 like ChatGPT with browsing enabled \u2014 can perform live web searches to help answer such questions. <\/p>\n<div class=\"callout-pullquote callout-pullquote--no-quote callout-pullquote--long\" data-aos-duration=\"900\" data-aos-anchor-placement=\"bottom-bottom\" data-aos-easing=\"ease-out-back\" data-aos=\"fade-up\">\n<p class=\"callout-pullquote__quote\">\n\t\t\t\t\tWithout access to live data, a model might still generate an answer based on its training data that doesn\u2019t reflect real-world updates.\n\t\t\t\t\t<\/p>\n<\/div>\n<p>In those cases, the LLM may generate a search query based on your question, and a separate part of the system (outside the model itself) carries out the search. The results are then sent back to the LLM so that it can generate an answer based on that fresh information. Not all LLMs or applications have this capability, though. Without access to live data, a model might still generate an answer based on its training data, which doesn\u2019t reflect real-world updates.<\/p>\n<h4>5. 
If I include documents as part of a prompt, can I ensure that the LLM uses only the provided documents when it generates the response? For example, if I upload a corporate expense policy document and ask a question, can I ensure that it uses only this document and not policy documents found on the web that it happened to be trained on?<\/h4>\n<p>No. While careful prompting and techniques like RAG can encourage an AI model to prioritize a set of provided documents, standard LLMs cannot be forced to use only that content. The model still has access to patterns and facts it learned during training and may blend that knowledge into its response \u2014 especially if the training data included similar content. <\/p>\n<h4>6. LLMs sometimes cite the sources that were used to generate the answer to a question. If an answer comes with supporting citations, can I trust it?<\/h4>\n<p>No. LLMs can fabricate (hallucinate) citations or use real sources in inaccurate or misleading ways. Some LLM systems include post-processing steps to verify citations, but these checks are not always reliable or comprehensive. Always verify that a cited source actually exists and that its content genuinely supports the information in the response.<\/p>\n<h4>7. When we have many documents, we use RAG, where we first gather relevant information from documents and include only those in the prompt. But modern LLMs have long context windows, and we can easily include <em>all<\/em> the documents. Is RAG even necessary?<\/h4>\n<p>Modern LLMs like GPT-4.1 and Gemini 2.5 offer million-token context windows \u2014 enough to hold entire books. This naturally raises the question: If we can fit everything in, why bother using a subset?<\/p>\n<p>While these extended context windows are powerful, including all documents in the prompt isn\u2019t always a good idea. There are several reasons why RAG still matters.<\/p>\n<p>First, RAG isn\u2019t just about keeping the prompt short. 
It\u2019s about selecting the most relevant parts of the documents. <a href=\"https:\/\/www.dbreunig.com\/2025\/06\/22\/how-contexts-fail-and-how-to-fix-them.html\" target=\"_blank\" rel=\"noopener\">Overloading the context<\/a> with too much or irrelevant information can hurt performance, and keeping the context and prompt relevant, concise, and accurate often leads to better answers.<\/p>\n<p>Second, even though LLMs can accept long contexts, they don\u2019t process all parts equally well. Research has shown that AI models tend to focus more on the beginning and end of a prompt and may miss important information in the middle.<\/p>\n<p>Finally, longer prompts mean more tokens, which increases API costs and slows down responses. This matters in real-world applications where cost and speed are important.<\/p>\n<p>In short, long context windows are useful, but they don\u2019t make retrieval obsolete. RAG remains an important tool, especially when you care about accuracy, efficiency, or cost. The option to use RAG should still be <a href=\"https:\/\/sloanreview.mit.edu\/article\/the-genai-app-step-youre-skimping-on-evaluations\/\">evaluated<\/a> based on the needs of your specific application.<\/p>\n<h4>8. Can LLM hallucinations be eliminated?<\/h4>\n<p>No, hallucinations cannot be fully eliminated with current LLM technology. 
They arise from the probabilistic nature of language models, which generate text by predicting likely token sequences based on training data \u2014 not by verifying facts against a reliable source.<\/p>\n<p>However, careful prompt engineering and strategies such as RAG, fine-tuning on domain-specific data, and post-processing with rule-based checks or external validation can <a href=\"https:\/\/docs.anthropic.com\/en\/docs\/test-and-evaluate\/strengthen-guardrails\/reduce-hallucinations\" target=\"_blank\">reduce hallucinations<\/a> in specific use cases.<a id=\"reflink3\" class=\"reflink\" href=\"#ref3\">3<\/a> While these strategies don\u2019t guarantee the elimination of hallucinations, they can improve an LLM\u2019s reliability enough for many practical applications.<\/p>\n<h4>9. Since LLM hallucinations and mistakes cannot be eliminated, we need to check the answers. How can we do this efficiently?<\/h4>\n<p>Efficiently checking LLM outputs depends on the type of task and the acceptable level of risk. Broadly, the main strategies include human review and automated methods.<\/p>\n<p>For open-ended tasks such as summaries, essays, reports, or analyses, human review provides the most reliable oversight. However, this is costly and difficult to scale, especially in scenarios that require fast or real-time responses. One way to improve efficiency here is to review only a subset of outputs (in other words, employ sampling) or triage based on risk, focusing human attention on the critical cases.<\/p>\n<p>An increasingly popular alternative is to use an \u201cAI judge,\u201d which is typically another LLM that can evaluate or verify the outputs of the first tool. This approach allows for scalable and fast accuracy-checking, but it comes with limitations: The judge itself may hallucinate or fail to match human judgment, particularly in complex cases. 
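A minimal sketch of the AI-judge pattern follows; `call_llm` is a hypothetical stub standing in for the second model's API call:

```python
# AI-judge sketch: a second model grades the first model's answer.
def call_llm(prompt: str) -> str:
    """Stub: a real implementation would call an LLM API here."""
    return "PASS" if "Paris" in prompt else "FAIL"

def judge_answer(question: str, answer: str) -> bool:
    """Ask a second model whether the first model's answer is acceptable."""
    prompt = (
        "You are a strict grader. Reply PASS or FAIL only.\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    verdict = call_llm(prompt)
    return verdict.strip() == "PASS"

ok = judge_answer("What is the capital of France?", "Paris")
# In a risk-aware workflow, FAIL or low-confidence verdicts would be
# escalated to a human reviewer rather than shipped to the user.
```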
Some improvements include using multiple judges for comparison, combining judge feedback with retrieval-based fact-checking, or designing workflows where low-confidence outputs are escalated to humans. <\/p>\n<div class=\"callout-pullquote callout-pullquote--no-quote\" data-aos-duration=\"900\" data-aos-anchor-placement=\"bottom-bottom\" data-aos-easing=\"ease-out-back\" data-aos=\"fade-new-left\">\n<p class=\"callout-pullquote__quote\">\n\t\t\t\t\tAn \u201cAI judge\u201d is typically another LLM used to evaluate or verify the outputs of the first tool.\n\t\t\t\t\t<\/p>\n<\/div>\n<p>Structured tasks, such as generating code, classifying information, or producing structured data in formats like SQL or JSON, lend themselves more readily to automation. Generated code can be tested automatically with unit tests or run in a sandbox environment. Classification outputs can be checked to ensure that they fall within predefined categories. Structured formats like JSON, SQL, or XML can be automatically checked for syntactic validity, though this only ensures correct formatting \u2014 not the accuracy of the content itself. <\/p>\n<p>In summary, the most efficient checking strategies combine automation and human oversight. Automated tools provide speed and scale, and humans provide reliability. By blending these methods and using risk-aware triaging, organizations can achieve a reasonable balance between quality assurance and efficiency.<\/p>\n<h4>10. We are building an LLM-based chatbot and would like to guarantee that its answer to a question stays unchanged when different users ask that same question (or one user asks the same question at different times). Is this possible?<\/h4>\n<p>If by \u201cguarantee\u201d you mean <em>exactly<\/em> the same wording every time, the short answer is no.<\/p>\n<p>If the same question is posed on different occasions using different words, the LLM\u2019s answers will very likely change. 
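The syntactic-validity check for structured outputs mentioned in question 9 is straightforward to automate; here is a sketch for JSON (the sample strings are illustrative):

```python
# Automated format check for structured LLM output. A passing check only
# means the text is well-formed JSON, not that its contents are accurate.
import json

def is_valid_json(llm_output: str) -> bool:
    """Return True if the text parses as JSON."""
    try:
        json.loads(llm_output)
        return True
    except json.JSONDecodeError:
        return False

print(is_valid_json('{"region": "EMEA", "q3_revenue": 1250000}'))  # True
print(is_valid_json('{"region": "EMEA", q3_revenue: }'))           # False
```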
But even if the exact same question statement is used, it\u2019s almost impossible to guarantee that <em>exactly<\/em> the same answer will be generated every single time.<\/p>\n<p>You can reduce variability by configuring certain LLM settings (for example, setting \u201ctemperature\u201d to zero), locking the exact model version, and even self-hosting so you control the entire hardware and software stack. But even then, technical factors make it exceedingly difficult to eliminate all variation in real-world production environments.<a id=\"reflink4\" class=\"reflink\" href=\"#ref4\">4<\/a> Thus, you\u2019ll still occasionally see small wording or emphasis shifts that don\u2019t change the meaning of the underlying answer. Note that this may be adequate if you mainly care about the meaning of the answers rather than their exact wording. <\/p>\n<p>The only way to truly guarantee identical wording is to store (cache) the answer the first time it is generated and serve that stored text whenever the same question is detected. 
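The caching approach can be sketched as follows; the normalization step here is a deliberately naive assumption, and it is exactly where repeat detection can fail:

```python
# Serve a stored answer for repeat questions; call the LLM only on a miss.
answer_cache: dict[str, str] = {}

def normalize(question: str) -> str:
    """Naive repeat detection: lowercase, drop '?' and extra whitespace."""
    return " ".join(question.lower().replace("?", "").split())

def cached_answer(question: str, generate) -> str:
    key = normalize(question)
    if key not in answer_cache:
        answer_cache[key] = generate(question)  # first occurrence: generate
    return answer_cache[key]                    # repeats: identical wording

# Usage with a stand-in generator:
first = cached_answer("What is our refund policy?", lambda q: "30 days.")
repeat = cached_answer("what is our refund policy", lambda q: "DIFFERENT")
# repeat == first: exact repeats get identical wording, but a rephrasing
# such as "How long do refunds take?" would miss the cache and regenerate.
```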
This approach works well if your repeat detection is perfect, but in practice, reworded or slightly altered questions may bypass the cache and trigger LLM regeneration \u2014 which can produce a different answer.<\/p>\n<p>In short: You can make answers extremely consistent, but a 100% wording guarantee is not achievable with current technology.<\/p>\n<div class=\"article-authors\" id=\"article-authors\">\n<h4 class=\"article-authors__title\">About the Author<\/h4>\n<div class=\"article-authors__bio\">\n<p>Rama Ramakrishnan is a professor of the practice at the MIT Sloan School of Management.<\/p>\n<\/div><\/div>\n<div class=\"article-ref\" id=\"article-ref\">\n<h4 class=\"article-ref__title\">References<\/h4>\n<div class=\"article-ref__list\">\n<p id=\"ref1\"><b>1.<\/b> On average, a token is about three-fourths of a word, and modern LLMs have a vocabulary of tens of thousands to over 100,000 tokens. You can enter different questions into <a href=\"https:\/\/platform.openai.com\/tokenizer\" target=\"_blank\">OpenAI\u2019s Tokenizer tool<\/a> and see how a word is tokenized to gain a deeper understanding.<\/p>\n<p id=\"ref2\"><b>2.<\/b> Strictly speaking, given an input, the LLM generates a probability (that is, a number between 0.0 and 1.0) for each token in its vocabulary. You can think of the probability for a token as a measure of its suitability to be the next token. Across all the tokens in the vocabulary, the probabilities add up to 1.0. The next token is selected based on these probabilities using a variety of developer-controllable strategies (such as picking the token with the highest probability or selecting a token randomly in proportion to its probability).<\/p>\n<p id=\"ref3\"><b>3.<\/b> For a recent survey of academic research on this topic, see Y. Wang, M. Wang, M.A. 
Manzoor, et al., \u201c<a href=\"https:\/\/aclanthology.org\/2024.emnlp-main.1088.pdf\" target=\"_blank\">Factuality of Large Language Models: A Survey<\/a>,\u201d in \u201cProceedings of the 2024 Conference on Empirical Methods in Natural Language Processing\u201d (Miami: Association for Computational Linguistics, Nov. 12-16, 2024), 19519-19529.<\/p>\n<p id=\"ref4\"><b>4.<\/b> To name a few: nondeterministic GPU operations, floating-point rounding differences, and silent back-end updates.<\/p>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Topics Data, AI, &amp; Machine Learning AI &amp; Machine Learning Column Our expert columnists offer opinion and analysis on important issues facing modern businesses and managers. More in this series Carolyn Geason-Beissel\/MIT SMR | Getty Images Summary: Business leaders are now rethinking workflows, organizational design, and other disciplines as their companies embrace artificial intelligence and [&hellip;]<\/p>","protected":false},"author":1,"featured_media":3342,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[9],"tags":[],"class_list":["post-3341","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-management"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.7.1 (Yoast SEO v25.8) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>How LLMs Work: Top 10 Executive-Level Questions - MORE SOURCING LTD<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" 
href=\"https:\/\/moresourcing.com\/es\/how-llms-work\/\" \/>\n<meta property=\"og:locale\" content=\"es_ES\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How LLMs Work: Top 10 Executive-Level Questions\" \/>\n<meta property=\"og:description\" content=\"Topics Data, AI, &amp; Machine Learning AI &amp; Machine Learning Column Our expert columnists offer opinion and analysis on important issues facing modern businesses and managers. More in this series Carolyn Geason-Beissel\/MIT SMR | Getty Images Summary: Business leaders are now rethinking workflows, organizational design, and other disciplines as their companies embrace artificial intelligence and [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/moresourcing.com\/es\/how-llms-work\/\" \/>\n<meta property=\"og:site_name\" content=\"MORE SOURCING LTD\" \/>\n<meta property=\"article:published_time\" content=\"2025-09-24T12:59:48+00:00\" \/>\n<meta name=\"author\" content=\"MS\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"MS\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tiempo de lectura\" \/>\n\t<meta name=\"twitter:data2\" content=\"13 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/moresourcing.com\/how-llms-work\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/moresourcing.com\/how-llms-work\/\"},\"author\":{\"name\":\"MS\",\"@id\":\"https:\/\/moresourcing.com\/#\/schema\/person\/2c9a233f0ad18413717419291cacdf69\"},\"headline\":\"How LLMs Work: Top 10 Executive-Level 
Questions\",\"datePublished\":\"2025-09-24T12:59:48+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/moresourcing.com\/how-llms-work\/\"},\"wordCount\":2639,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/moresourcing.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/moresourcing.com\/how-llms-work\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/moresourcing.com\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-2400x1260-1-1200x630.jpg\",\"articleSection\":[\"Management\"],\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/moresourcing.com\/how-llms-work\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/moresourcing.com\/how-llms-work\/\",\"url\":\"https:\/\/moresourcing.com\/how-llms-work\/\",\"name\":\"How LLMs Work: Top 10 Executive-Level Questions - MORE SOURCING LTD\",\"isPartOf\":{\"@id\":\"https:\/\/moresourcing.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/moresourcing.com\/how-llms-work\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/moresourcing.com\/how-llms-work\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/moresourcing.com\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-2400x1260-1-1200x630.jpg\",\"datePublished\":\"2025-09-24T12:59:48+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/moresourcing.com\/how-llms-work\/#breadcrumb\"},\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/moresourcing.com\/how-llms-work\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/moresourcing.com\/how-llms-work\/#primaryimage\",\"url\":\"https:\/\/moresourcing.com\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-2400x1260-1-1200x630.jpg\",\"contentUrl\":\"https:\/\/moresourcing.com\/wp-content\/uploads\/2025\/09\/Ramakrishnan-Questions-2400x1260-1-1200x630.jpg\",\"width\":1200,\"height\":630},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/moresourcing.com\/how-llms-work\/#breadcrumb\",\"it
emListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/moresourcing.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"How LLMs Work: Top 10 Executive-Level Questions\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/moresourcing.com\/#website\",\"url\":\"https:\/\/moresourcing.com\/\",\"name\":\"MORE SOURCING LTD\",\"description\":\"Your Global Trade Experts\",\"publisher\":{\"@id\":\"https:\/\/moresourcing.com\/#organization\"},\"alternateName\":\"MORE SOURCING LTD\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/moresourcing.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"es\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/moresourcing.com\/#organization\",\"name\":\"MORE SOURCING LTD\",\"url\":\"https:\/\/moresourcing.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/moresourcing.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/moresourcing.com\/wp-content\/uploads\/2025\/07\/cropped-cropped-MS-logo-02-scaled-2.png\",\"contentUrl\":\"https:\/\/moresourcing.com\/wp-content\/uploads\/2025\/07\/cropped-cropped-MS-logo-02-scaled-2.png\",\"width\":2558,\"height\":1273,\"caption\":\"MORE SOURCING 