Dataconomy

Apple researchers refine StarChat-Beta into UICoder for UI coding

UICoder, Apple’s new SwiftUI code generator, was trained through a feedback loop producing 996K curated examples, achieving higher compilation success than GPT-4 in internal tests.

By Emre Çıtak
August 15, 2025
in Research

Apple researchers developed a method to train an open-source large language model, StarChat-Beta, to generate SwiftUI user interface code by creating a large synthetic dataset and iteratively refining it through automated feedback.

The research, detailed in the paper “UICoder: Finetuning Large Language Models to Generate User Interface Code through Automated Feedback,” addresses challenges faced by large language models (LLMs) in generating syntactically correct and well-designed user interface (UI) code. LLMs exhibit proficiency in various writing tasks, including creative writing and general coding, but encounter difficulties with UI code generation. This limitation stems from a scarcity of UI code examples within the datasets used for training, even in curated or manually authored fine-tuning datasets, where UI code can constitute less than one percent of the total examples.

To overcome this data sparsity, researchers initiated their approach using StarChat-Beta, an open-source LLM specifically designed for coding tasks. They provided StarChat-Beta with a collection of UI descriptions, instructing the model to generate a substantial synthetic dataset comprising SwiftUI programs derived from these descriptions. This synthetic generation phase aimed to produce a broad initial set of UI code examples.
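The paper does not publish the prompting code, but this generation step can be sketched roughly as follows. The prompt wording, the `generate_candidates` helper, and the stubbed model callable are illustrative assumptions, not Apple's actual setup:

```python
# Sketch of the synthetic-generation step: prompt a code LLM with natural-
# language UI descriptions and collect candidate SwiftUI programs.

def build_prompt(ui_description: str) -> str:
    """Wrap a UI description in an instruction prompt (hypothetical wording)."""
    return (
        "Write a complete SwiftUI view that implements the following "
        f"interface description:\n\n{ui_description}\n\n"
        "Return only Swift code."
    )

def generate_candidates(descriptions, model, samples_per_description=5):
    """Ask the model for several candidate programs per description."""
    candidates = []
    for desc in descriptions:
        prompt = build_prompt(desc)
        for _ in range(samples_per_description):
            # `model` is any callable mapping a prompt string to generated code
            candidates.append((desc, model(prompt)))
    return candidates
```

Sampling several candidates per description trades compute for coverage: weak candidates are cheap to discard in the later validation stage.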


Following generation, each program underwent a two-stage validation process. First, the code was run through a Swift compiler to verify that it compiled successfully. Second, GPT-4V, a vision-language model, compared the rendered interface against the original UI description to assess fidelity and correctness.

Outputs that failed to compile, were deemed irrelevant to the description, or were duplicates were systematically discarded. The remaining outputs, having met the compilation and relevance criteria, formed a high-quality training set. This refined dataset was subsequently used to fine-tune the StarChat-Beta model.
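The curation stage described above can be sketched as a simple filter. Here `compiles` and `is_relevant` are hypothetical stand-ins for the Swift-compiler check and the GPT-4V comparison, and the whitespace-stripping normalization is an assumed (crude) way to catch trivially reformatted duplicates:

```python
import hashlib

def normalize(code: str) -> str:
    """Strip all whitespace so trivially reformatted duplicates collide."""
    return "".join(code.split())

def curate(candidates, compiles, is_relevant):
    """Keep only programs that compile, match their description, and are unique."""
    seen, kept = set(), []
    for desc, code in candidates:
        if not compiles(code):
            continue                      # stage 1: compiler check
        if not is_relevant(desc, code):
            continue                      # stage 2: vision-model relevance check
        digest = hashlib.sha256(normalize(code).encode()).hexdigest()
        if digest in seen:
            continue                      # discard duplicates
        seen.add(digest)
        kept.append((desc, code))
    return kept
```

Ordering the checks cheapest-first matters in practice: a compiler pass is far cheaper than a vision-model call, so failed compiles never reach GPT-4V.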

The researchers implemented an iterative refinement process, repeating the entire generation and validation cycle multiple times. Each iteration demonstrated an improvement in the model’s ability to generate SwiftUI code, which, in turn, contributed to the creation of even cleaner and more accurate datasets for subsequent fine-tuning rounds. This continuous feedback loop was central to the model’s progressive enhancement.
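The outer loop ties these stages together: each round, the progressively fine-tuned model regenerates data, the curated survivors grow the training set, and the model is fine-tuned again. All callables here are hypothetical stand-ins for the steps described in the article, not the researchers' code:

```python
# Sketch of the iterative refinement loop: generate -> curate -> fine-tune,
# repeated so each round's model produces cleaner data for the next round.

def refine(model, descriptions, generate, curate, finetune, rounds=5):
    dataset = []
    for _ in range(rounds):
        candidates = generate(descriptions, model)   # synthesize programs
        kept = curate(candidates)                    # compile + relevance + dedup
        dataset.extend(kept)                         # grow the curated corpus
        model = finetune(model, dataset)             # next round uses a better model
    return model, dataset
```

The feedback is indirect but compounding: a better model yields a higher survival rate through the filters, which yields a larger, cleaner dataset, which yields a better model.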

After completing five full rounds of this iterative process, the researchers had amassed approximately 996,000 distinct SwiftUI programs. This dataset was used to train the final model, named UICoder. In tests, UICoder consistently produced code that compiled and interfaces that aligned significantly more closely with the original prompts than those of the initial StarChat-Beta model. Both automated metrics and human evaluations confirmed that UICoder substantially outperformed the base model at generating SwiftUI code.

UICoder also demonstrated overall code quality comparable to GPT-4, and notably surpassed GPT-4 in compilation success rate. A significant finding from the study was the accidental exclusion of Swift code from StarChat-Beta’s initial training data. StarChat-Beta was primarily trained on three corpora: The Stack, a 250-billion-token dataset of permissively licensed code repositories; crawled web pages; and OpenAssistant-Guanaco, a smaller instruction-tuning dataset.

The researchers determined that Swift code repositories were inadvertently excluded during the creation of The Stack. Furthermore, manual inspection revealed that the OpenAssistant-Guanaco dataset contained only a single example of Swift code among ten thousand entries in its response field. The researchers hypothesized that any Swift examples StarChat-Beta encountered during its initial training likely originated from crawled web pages, which tend to be lower in quality and less structured than repository code.

This inadvertent exclusion implies that UICoder’s performance gains were not attributable to the re-processing of pre-existing SwiftUI examples from its base training, as there were virtually none. Instead, the improvements stemmed entirely from the self-generated, rigorously curated datasets developed through Apple’s automated feedback loop.

This outcome led the researchers to hypothesize that their method, though demonstrated here only for SwiftUI, could generalize to other programming languages and UI toolkits. The full study is available on arXiv.


Tags: Apple, UICoder


COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.