Overview
Join Tether and shape the future of digital finance. At Tether, we’re building solutions that empower businesses to integrate reserve-backed tokens across blockchains. Our technology enables you to store, send, and receive digital tokens instantly, securely, and globally, with transparency at the core of every transaction.
Innovate with Tether. Tether Finance features the world’s most trusted stablecoin, USDT, and pioneering digital asset tokenization services. Tether Power drives sustainable growth through eco-friendly energy solutions for Bitcoin mining. Tether Data fuels breakthroughs in AI and peer-to-peer technology with solutions like KEET, our flagship app for secure and private data sharing. Tether Education democratizes access to digital learning, and Tether Evolution explores the intersection of technology and human potential.
Why join us? Our team is a global talent powerhouse working remotely from around the world. If you’re passionate about fintech and ready to collaborate with leading minds, this is your opportunity to push boundaries and set new standards. Excellent English communication skills are a plus as you contribute to our platform.
About the job
We are looking for an experienced AI Model Engineer with deep expertise in kernel development, model optimization, fine-tuning, and GPU acceleration. You will extend our inference framework to support inference and fine-tuning for language models, with a strong focus on mobile and integrated GPU acceleration via Vulkan.
This role requires hands-on experience with quantization techniques, LoRA architectures, Vulkan backend development, and mobile GPU debugging. You will play a critical role in pushing the boundaries of desktop and on-device inference and fine-tuning performance for next-generation SLMs and LLMs.
Responsibilities
- Implement and optimize custom inference and fine-tuning kernels for small and large language models across multiple hardware backends.
- Implement and optimize full and LoRA fine-tuning for small and large language models across multiple hardware backends.
- Design and extend datatype and precision support (int, float, mixed precision, ternary QTypes, etc.).
- Design, customize, and optimize Vulkan compute shaders for quantized operators and fine-tuning workflows.
- Investigate and resolve GPU acceleration issues on Vulkan and integrated / mobile GPUs.
- Architect and prepare support for advanced quantization techniques to improve efficiency and memory usage.
- Debug and optimize GPU operators (e.g., int8, fp16, fp4, ternary).
- Integrate and validate quantization workflows for training and inference.
- Conduct evaluation and benchmarking (e.g., perplexity testing, fine-tuned adapter performance).
- Conduct GPU testing across desktop and mobile devices.
- Collaborate with research and engineering teams to prototype, benchmark, and scale new model optimization methods.
- Deliver production-grade, efficient language model deployment for mobile and edge use cases.
- Work with cross-functional teams to integrate optimized serving and inference frameworks into production pipelines designed for edge and on-device applications. Define clear success metrics such as improved real-world performance, low error rates, robust scalability, and optimal memory usage, with ongoing monitoring and iterative refinements.
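For context on the LoRA fine-tuning work referenced above: LoRA freezes the pretrained weights and trains only a pair of low-rank factor matrices per layer, which is what makes fine-tuning feasible on memory-constrained mobile and edge devices. The following is a minimal NumPy sketch of the idea (illustrative only; the dimensions, rank, and scaling are arbitrary assumptions, not Tether's implementation):

```python
import numpy as np

# Minimal LoRA sketch: instead of updating a full weight matrix W (d_out x d_in),
# train only two low-rank factors A (d_out x r) and B (r x d_in), with r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 4

W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = np.zeros((d_out, r))                 # zero-initialized so the adapter starts as a no-op
B = rng.standard_normal((r, d_in)) * 0.01
alpha = 8.0                              # adapter scaling hyperparameter

def forward(x):
    # Effective weight is W + (alpha / r) * A @ B; the adapter adds rank-r capacity.
    return x @ (W + (alpha / r) * (A @ B)).T

x = rng.standard_normal((2, d_in))
# With A = 0 the adapter contributes nothing, so outputs match the frozen model.
assert np.allclose(forward(x), x @ W.T)

# Trainable-parameter savings: full fine-tune vs. LoRA adapter.
full_params = d_out * d_in           # 8192
lora_params = r * (d_out + d_in)     # 768
print(full_params, lora_params)
```

In practice only `A` and `B` receive gradients, so the backward operators mentioned in the responsibilities need to cover just the adapter path.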
Requirements
- Proficiency in C++ and GPU kernel programming.
- Proven expertise in GPU acceleration with Vulkan framework.
- Strong background in quantization and mixed-precision model optimization.
- Experience and expertise in Vulkan compute shader development and customization.
- Familiarity with LoRA fine-tuning and parameter-efficient training methods.
- Ability to debug GPU-specific performance and stability issues on desktop and mobile devices.
- Hands-on experience with mobile GPU acceleration and model inference.
- Familiarity with large language model architectures (e.g., Qwen, Gemma, LLaMA, Falcon, etc.).
- Experience implementing custom backward operators for fine-tuning.
- Experience creating and curating custom datasets for style transfer and domain-specific fine-tuning.
- Demonstrated ability to apply empirical research to overcome challenges in model development.
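As a rough illustration of the quantization background these requirements call for, here is a minimal symmetric int8 post-training quantization sketch in NumPy (a toy per-tensor scheme for illustration; real deployments typically use per-channel or group-wise scales and formats like fp16, fp4, or ternary):

```python
import numpy as np

# Symmetric int8 quantization: map float weights to int8 with a single
# per-tensor scale (zero-point fixed at 0), then dequantize to measure error.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Without clipping, round-to-nearest error is at most half a quantization step.
max_err = np.abs(w - w_hat).max()
print(q.dtype, max_err <= scale / 2 + 1e-6)
```

Evaluating how this reconstruction error propagates to model quality (e.g., via the perplexity benchmarking mentioned above) is the validation half of the workflow.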
Important information for candidates
- Apply only through our official channels. We do not use third-party platforms or agencies for recruitment unless clearly stated. All open roles are listed on our official careers page: tether.recruitee.com
- Verify the recruiter’s identity. All our recruiters have verified LinkedIn profiles. If you’re unsure, you can confirm their identity by checking their profile or contacting us through our website.
- Be cautious of unusual communication methods. We do not conduct interviews over WhatsApp, Telegram, or SMS. All communication is done through official company emails and platforms.
- Double-check email addresses. All communication from us will come from emails ending in tether.to or tether.io.
- We will never request payment or financial details. If someone asks for personal financial information or payment during the hiring process, it is a scam. Please report it immediately.
- When in doubt, feel free to reach out through our official website.