<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[UCAN AI - Excellence 90% Faster]]></title><description><![CDATA[Agents as a Service, Custom LLMs for Retail, Finance & Healthcare]]></description><link>https://blog.ucan.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!yODi!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb05d8eb-f2cd-4eb9-8c17-e2aa4d75f026_544x544.png</url><title>UCAN AI - Excellence 90% Faster</title><link>https://blog.ucan.ai</link></image><generator>Substack</generator><lastBuildDate>Sat, 25 Apr 2026 12:37:17 GMT</lastBuildDate><atom:link href="https://blog.ucan.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[UCAN AI]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[ucanai@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[ucanai@substack.com]]></itunes:email><itunes:name><![CDATA[UCAN AI]]></itunes:name></itunes:owner><itunes:author><![CDATA[UCAN AI]]></itunes:author><googleplay:owner><![CDATA[ucanai@substack.com]]></googleplay:owner><googleplay:email><![CDATA[ucanai@substack.com]]></googleplay:email><googleplay:author><![CDATA[UCAN AI]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Meta AI Proposes Large Concept Models (LCMs): A Semantic Leap Beyond Token-based Language Modeling]]></title><description><![CDATA[Researchers at Meta AI have proposed a new approach: Large Concept Models (LCMs)]]></description><link>https://blog.ucan.ai/p/meta-ai-proposes-large-concept-models</link><guid 
isPermaLink="false">https://blog.ucan.ai/p/meta-ai-proposes-large-concept-models</guid><dc:creator><![CDATA[UCAN AI]]></dc:creator><pubDate>Tue, 17 Dec 2024 03:06:33 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/153243596/830c00b59e3ee65b4eeecd70a8258695.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!vfCp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffc9d37f-2e90-4328-b341-2c107b0ff4d9_1024x588.png" width="1024" height="588" alt=""></figure></div><p>Large Language Models (LLMs) have achieved remarkable advancements in natural language processing (NLP), enabling applications in text generation, summarization, and question-answering. However, their reliance on token-level processing&#8212;predicting one word at a time&#8212;presents challenges. This approach contrasts with human communication, which often operates at higher levels of abstraction, such as sentences or ideas. Token-level modeling also struggles with tasks requiring long-context understanding and may produce outputs with inconsistencies. Moreover, extending these models to multilingual and multimodal applications is computationally expensive and data-intensive. To address these issues, researchers at Meta AI have proposed a new approach: Large Concept Models (LCMs).</p><p>Large Concept Models</p><p>Meta AI&#8217;s Large Concept Models (LCMs) represent a shift from traditional LLM architectures. LCMs bring two significant innovations:</p><p>1. 
High-dimensional Embedding Space Modeling: Instead of operating on discrete tokens, LCMs perform computations in a high-dimensional embedding space. This space represents abstract units of meaning, referred to as concepts, which correspond to sentences or utterances. The embedding space, called SONAR, is designed to be language- and modality-agnostic, supporting over 200 languages and multiple modalities, including text and speech.</p><p>2. Language- and Modality-agnostic Modeling: Unlike models tied to specific languages or modalities, LCMs process and generate content at a purely semantic level. This design allows seamless transitions across languages and modalities, enabling strong zero-shot generalization.</p><p>At the core of LCMs are concept encoders and decoders that map input sentences into SONAR&#8217;s embedding space and decode embeddings back into natural language or other modalities. These components are frozen, ensuring modularity and ease of extension to new languages or modalities without retraining the entire model.</p><p>Technical Details and Benefits of LCMs</p><p>LCMs introduce several innovations to advance language modeling:</p><p>- Hierarchical Architecture: LCMs employ a hierarchical structure, mirroring human reasoning processes. This design improves coherence in long-form content and enables localized edits without disrupting broader context.</p><p>- Diffusion-based Generation: Diffusion models were identified as the most effective design for LCMs. These models predict the next SONAR embedding based on preceding embeddings. 
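</p><p>As an illustration, the denoising loop at the heart of this design can be sketched in a few lines. The code below is a toy stand-in, assuming a hand-rolled "denoiser" that simply pulls a noise vector toward the mean of the context embeddings; an actual LCM uses a learned Transformer denoiser and SONAR-sized embeddings.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # toy embedding size; real SONAR embeddings are far larger

def denoise_step(x_t, context, t, total):
    # Stand-in denoiser: blend the noisy embedding toward the mean of the
    # preceding sentence embeddings. An actual LCM would apply a learned
    # Transformer conditioned on the context at this step.
    alpha = (total - t) / total
    return alpha * x_t + (1 - alpha) * context.mean(axis=0)

def predict_next_concept(context, steps=10):
    # Start from pure noise and iteratively denoise into the next "concept"
    # embedding, conditioned on the embeddings of preceding sentences.
    x = rng.normal(size=DIM)
    for t in range(steps):
        x = denoise_step(x, context, t, steps)
    return x

context = rng.normal(size=(3, DIM))  # stand-ins for three preceding sentence embeddings
nxt = predict_next_concept(context)
print(nxt.shape)  # (16,)
```

<p>In the full system, a frozen SONAR decoder would then map the predicted embedding back into a sentence in any supported language or modality.</p><p>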
Two architectures were explored:</p><p>  - One-Tower: A single Transformer decoder handles both context encoding and denoising.</p><p>  - Two-Tower: Separates context encoding and denoising, with dedicated components for each task.</p><p>- Scalability and Efficiency: Concept-level modeling reduces sequence length compared to token-level processing, addressing the quadratic complexity of standard Transformers and enabling more efficient handling of long contexts.</p><p>- Zero-shot Generalization: LCMs exhibit strong zero-shot generalization, performing well on unseen languages and modalities by leveraging SONAR&#8217;s extensive multilingual and multimodal support.</p><p>- Search and Stopping Criteria: A search algorithm with a stopping criterion based on distance to an &#8220;end of document&#8221; concept ensures coherent and complete generation without requiring fine-tuning.</p><p>Insights from Experimental Results</p><p>Meta AI&#8217;s experiments highlight the potential of LCMs. A diffusion-based Two-Tower LCM scaled to 7 billion parameters demonstrated competitive performance in tasks like summarization. Key results include:</p><p>- Multilingual Summarization: LCMs outperformed baseline models in zero-shot summarization across multiple languages, showcasing their adaptability.</p><p>- Summary Expansion Task: This novel evaluation task demonstrated the capability of LCMs to generate expanded summaries with coherence and consistency.</p><p>- Efficiency and Accuracy: LCMs processed shorter sequences more efficiently than token-based models while maintaining accuracy. Metrics such as mutual information and contrastive accuracy showed significant improvement, as detailed in the study&#8217;s results.</p><p>Conclusion</p><p>Meta AI&#8217;s Large Concept Models present a promising alternative to traditional token-based language models. 
By leveraging high-dimensional concept embeddings and modality-agnostic processing, LCMs address key limitations of existing approaches. Their hierarchical architecture enhances coherence and efficiency, while their strong zero-shot generalization expands their applicability to diverse languages and modalities. As research into this architecture continues, LCMs have the potential to redefine the capabilities of language models, offering a more scalable and adaptable approach to AI-driven communication.</p>]]></content:encoded></item><item><title><![CDATA[Anthropic - Introducing the Model Context Protocol]]></title><description><![CDATA[Model Context Protocol]]></description><link>https://blog.ucan.ai/p/anthropic-introducing-the-model-context</link><guid isPermaLink="false">https://blog.ucan.ai/p/anthropic-introducing-the-model-context</guid><dc:creator><![CDATA[UCAN AI]]></dc:creator><pubDate>Wed, 27 Nov 2024 17:49:09 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/152250199/2a9d3a699e1fc1b1d8a2275327662928.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Introducing the Model Context Protocol</p><p>Today, we're open-sourcing the Model Context Protocol (MCP), a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses.</p><p>As AI assistants gain mainstream adoption, the industry has invested heavily in model capabilities, achieving rapid advances in reasoning and quality. Yet even the most sophisticated models are constrained by their isolation from data&#8212;trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale.</p><p>MCP addresses this challenge. 
It provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. The result is a simpler, more reliable way to give AI systems access to the data they need.</p><p>The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.</p><p>Today, we're introducing three major components of the Model Context Protocol for developers:</p><p>- The Model Context Protocol specification and SDKs</p><p>- Local MCP server support in the Claude Desktop apps</p><p>- An open-source repository of MCP servers</p><p>Claude 3.5 Sonnet is adept at quickly building MCP server implementations, making it easy for organizations and individuals to rapidly connect their most important datasets with a range of AI-powered tools. To help developers start exploring, we&#8217;re sharing pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.</p><p>Early adopters like Block and Apollo have integrated MCP into their systems, while development tools companies including Zed, Replit, Codeium, and Sourcegraph are working with MCP to enhance their platforms&#8212;enabling AI agents to better retrieve relevant information to further understand the context around a coding task and produce more nuanced and functional code with fewer attempts.</p><p>"At Block, open source is more than a development model&#8212;it&#8217;s the foundation of our work and a commitment to creating technology that drives meaningful change and serves as a public good for all,&#8221; said Dhanji R. Prasanna, Chief Technology Officer at Block. 
&#8220;Open technologies like the Model Context Protocol are the bridges that connect AI to real-world applications, ensuring innovation is accessible, transparent, and rooted in collaboration. We are excited to partner on a protocol and use it to build agentic systems, which remove the burden of the mechanical so people can focus on the creative.&#8221;</p><p>Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today's fragmented integrations with a more sustainable architecture.</p><p>Getting started</p><p>Developers can start building and testing MCP connectors today. Existing Claude for Work customers can begin testing MCP servers locally, connecting Claude to internal systems and datasets. We'll soon provide developer toolkits for deploying remote production MCP servers that can serve your entire Claude for Work organization.</p><p>To start building:</p><p>- Install pre-built MCP servers through the Claude Desktop app</p><p>- Follow our quickstart guide to build your first MCP server</p><p>- Contribute to our open-source repositories of connectors and implementations</p><p>An open community</p><p>We&#8217;re committed to building MCP as a collaborative, open-source project and ecosystem, and we&#8217;re eager to hear your feedback. 
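</p><p>To make the client-server shape described above concrete, here is a toy dispatcher for a single MCP-style request. The method name and JSON-RPC 2.0 framing follow the protocol's conventions, but the tool registry and the tool itself are hypothetical, and a real server also handles the initialization handshake, capabilities, and transport framing.</p>

```python
import json

def handle_request(request, tools):
    # Dispatch one JSON-RPC 2.0 request against a toy tool registry.
    if request["method"] == "tools/list":
        result = {"tools": [{"name": name, "description": desc}
                            for name, desc in tools.items()]}
        return {"jsonrpc": "2.0", "id": request["id"], "result": result}
    # Standard JSON-RPC error code for an unknown method.
    return {"jsonrpc": "2.0", "id": request["id"],
            "error": {"code": -32601, "message": "Method not found"}}

# Hypothetical tool a server might expose over MCP.
tools = {"query_database": "Run a read-only SQL query"}
req = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
resp = handle_request(req, tools)
print(json.dumps(resp))
```

<p>In this picture, an application such as the Claude Desktop app plays the client role: it sends requests like the one above and surfaces the returned tools to the model.</p><p>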
Whether you&#8217;re an AI tool developer, an enterprise looking to leverage existing data, or an early adopter exploring the frontier, we invite you to build the future of context-aware AI together.</p>]]></content:encoded></item><item><title><![CDATA[Google’s connecting Spotify to its Gemini AI assistant]]></title><description><![CDATA[Gemini AI assistant]]></description><link>https://blog.ucan.ai/p/googles-connecting-spotify-to-its</link><guid isPermaLink="false">https://blog.ucan.ai/p/googles-connecting-spotify-to-its</guid><dc:creator><![CDATA[UCAN AI]]></dc:creator><pubDate>Wed, 27 Nov 2024 17:43:44 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/152249953/d73f7914be1c0e228a83261eb6ca2789.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Google is integrating Spotify with its Gemini AI assistant, enabling users to search for and play music through natural language requests. This feature, first detected in the Google app's code in June, is now being rolled out to compatible Android devices.</p><p>With Gemini, users can play music by specifying song titles, artist names, album names, playlist names, or even activities. However, it currently does not support creating playlists or radio stations on Spotify. Notably, if another music service, such as YouTube Music, is already linked, users must specify their desired service in their requests, as Gemini will default to the last used service.   </p><p>To utilize this extension, users must link their Spotify and Google accounts and enable Gemini Apps Activity, which retains AI queries for up to 72 hours. It is important to note that the Spotify extension is not available within Google Messages, the Gemini web app, or the Gemini app on iOS. 
Additionally, the functionality is limited to English language settings for Gemini.</p>]]></content:encoded></item><item><title><![CDATA[Uber for AI labeling]]></title><description><![CDATA[AI Labeling]]></description><link>https://blog.ucan.ai/p/uber-for-ai-labeling</link><guid isPermaLink="false">https://blog.ucan.ai/p/uber-for-ai-labeling</guid><dc:creator><![CDATA[UCAN AI]]></dc:creator><pubDate>Wed, 27 Nov 2024 17:24:30 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/152249279/e132ab193536a4dde98608ee9dfea902.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Uber has introduced a new initiative for its independent contractors focused on AI training. According to a report by Bloomberg, the ride-hailing and delivery company is leveraging gig workers to enter the AI labeling industry. This move highlights Uber&#8217;s interest in expanding its independent contractor-driven business model to align with the rapidly growing field of machine learning and large language models.</p><p>The newly established &#8220;Scaled Solutions&#8221; division aims to connect businesses with &#8220;nuanced analysts, testers, and independent data operators&#8221; through its platform. This division builds on an existing internal team based in the US and India, which has been responsible for feature testing and converting restaurant menus for Uber Eats.</p><p>Uber has already been employing artificial intelligence and machine learning for its operations and is now making these capabilities available to external clients for a fee. The company is recruiting gig workers for tasks such as data labeling, testing, and localization for various companies, including Aurora, Luma AI, and Niantic.</p><p>Training AI models necessitates the involvement of numerous human workers to perform repetitive tasks. 
A significant aspect of AI model training involves tedious activities such as selecting the most human-like chatbot responses or labeling obstacles, like pedestrians, in self-driving car footage on a frame-by-frame basis. Companies developing AI models frequently hire workers from developing countries to carry out these tasks, compensating them with minimal payments for each completed activity. For instance, an engineer in India shared with Bloomberg that they were responsible for evaluating and rating the accuracy of AI-generated responses to complex coding problems, earning approximately 200 rupees per set, equivalent to about $2.37.</p><p>Currently, Uber is onboarding individuals from Canada, India, Poland, Nicaragua, and the US, offering differing pay rates per task completed, with earnings distributed to workers on a monthly basis. The company is also seeking individuals from diverse cultural backgrounds to enhance AI adaptability across various markets.</p><p>This initiative is not Uber's first venture into the AI space. The company previously invested billions in developing its own self-driving car technology but halted the project following a tragic incident involving one of its vehicles and a pedestrian. 
Additionally, in 2016, Uber acquired an AI research lab founded by cognitive scientist Gary Marcus and several other computer science professors.</p>]]></content:encoded></item><item><title><![CDATA[Anthropic says Claude AI can match your unique writing style]]></title><description><![CDATA[Claude AI Assistant]]></description><link>https://blog.ucan.ai/p/anthropic-says-claude-ai-can-match</link><guid isPermaLink="false">https://blog.ucan.ai/p/anthropic-says-claude-ai-can-match</guid><dc:creator><![CDATA[UCAN AI]]></dc:creator><pubDate>Wed, 27 Nov 2024 17:12:50 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/152248548/78103949e13c848d8b852721395bea4f.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Anthropic says Claude AI can match your unique writing style - The Verge</p><p>Anthropic is adding a new feature to its Claude AI assistant that will give users more control over how the chatbot responds to different writing tasks. The new custom styles are available to all Claude AI users, enabling anyone to train it to match their own communication style or select from preset options to quickly adjust the tone and level of detail it provides.</p><p>This update aims to personalize the chatbot&#8217;s replies and make them feel more natural or appropriate for specific applications, such as writing detailed technical documents or professional emails. Three preset styles are available: Formal for &#8220;clear and polished&#8221; text, Concise for shorter and more direct responses, and Explanatory for educational replies that need to include additional detail. If these don&#8217;t suit your requirements, Claude can also generate custom styles that are trained to mimic other writing mannerisms. 
Anthropic says users need to upload &#8220;sample content that reflects your preferred way of communicating&#8221; to the chatbot, and then instruct it on how to match the writing style.</p><p>Creating a custom style is effectively an easy way to automate how you engineer prompts to make responses sound more like your own personal style. &#8220;You might want in-depth explanations when you&#8217;re learning something new, or quick, straight-to-the-point answers when you&#8217;re in a hurry,&#8221; Claude&#8217;s Product leader Scott White said in the announcement. &#8220;You might prefer Claude to be more formal in some contexts, or use a friendly, conversational tone in others. The great thing is, you can now set these preferences once and have every interaction feel just right for you.&#8221;</p><p>It&#8217;s a welcome update that may make it less obvious when you&#8217;re using Claude to write different kinds of text, but the feature is not unique to Anthropic&#8217;s chatbot. OpenAI&#8217;s ChatGPT and Google&#8217;s Gemini have similar features that allow users to tailor responses based on their personal style of writing, for example, and Gemini can quickly adjust the tone or detail of Gmail drafts. 
The Writing Tools feature in Apple Intelligence also provides similar style presets.</p>]]></content:encoded></item><item><title><![CDATA[Coming soon]]></title><description><![CDATA[This is UCAN AI - Excellence 90% Faster.]]></description><link>https://blog.ucan.ai/p/coming-soon</link><guid isPermaLink="false">https://blog.ucan.ai/p/coming-soon</guid><dc:creator><![CDATA[UCAN AI]]></dc:creator><pubDate>Sun, 08 Sep 2024 15:07:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yODi!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb05d8eb-f2cd-4eb9-8c17-e2aa4d75f026_544x544.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>This is UCAN AI - Excellence 90% Faster.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.ucan.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.ucan.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>