This role focuses on developing rich semantic signals from a variety of sources, including queries, creatives, metadata, and user interactions, to support scalable ad retrieval, creative ranking, and marketplace optimization. You'll work at the forefront of LLM fine-tuning, knowledge graph construction, semantic search, and multimodal representation learning to extract structured intelligence from unstructured data.

- Design, implement, and scale ML systems that extract high-value semantic signals from structured and unstructured content
- Contribute to retrieval and ranking pipelines using query understanding, semantic embeddings, and dense/sparse indexing (a minimal retrieval sketch follows this list)
- Fine-tune and apply Large Language Models (LLMs) for NLP tasks such as content labeling, rewriting, and semantic similarity (see the labeling sketch after the list)
- Construct and use knowledge graphs and entity linking systems to enrich creative and query signals
- Work with multimodal data (e.g., combining text, image, and metadata signals) to build robust, cross-domain signal representations
- Build core components for a content understanding platform, such as entity extraction, topic modeling, creative summarization, and taxonomy generation
- Own experimentation, offline evaluation, and online validation of signal pipelines at massive scale
- Collaborate across engineering, infrastructure, and product teams to productionize systems while meeting Apple's high standards for reliability and privacy
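
For a concrete sense of the retrieval and indexing work described above, here is a minimal sketch of a dense retrieval loop over ad creatives. It assumes the sentence-transformers and faiss packages are available; the model checkpoint, example creatives, and query are illustrative placeholders rather than anything specified by the role.

```python
# Minimal dense retrieval sketch: embed creatives, index them, and search a query.
# Model choice and sample texts are assumptions for illustration only.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

creatives = [
    "Lightweight trail running shoes with waterproof mesh",
    "Noise-cancelling over-ear headphones, 30-hour battery",
    "Organic cold-brew coffee concentrate, 32 oz",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
vectors = model.encode(creatives, normalize_embeddings=True)  # unit-norm float32 vectors

# Inner product on unit-norm vectors is cosine similarity.
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(np.asarray(vectors, dtype="float32"))

query = model.encode(["running shoes for wet weather"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)

for rank, (i, score) in enumerate(zip(ids[0], scores[0]), start=1):
    print(f"{rank}. score={score:.3f}  {creatives[i]}")
```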
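The content-labeling task mentioned above could be prototyped in a similar spirit with an off-the-shelf zero-shot classifier standing in for a fine-tuned LLM; the checkpoint and taxonomy labels below are assumptions made purely for illustration.

```python
# Hedged sketch of content labeling: a zero-shot classifier as a stand-in
# for a fine-tuned LLM. Checkpoint and candidate labels are hypothetical.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

creative = "Save 20% on noise-cancelling headphones with 30-hour battery life"
labels = ["electronics", "apparel", "groceries", "travel"]  # hypothetical taxonomy

result = classifier(creative, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```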