Company | Location | Salary
---|---|---
Coin Market Cap Ltd | Hong Kong, Hong Kong | $71k - $103k
Caiz | Remote | $80k - $150k
Sentient | India | $104k - $110k
Binance | Taipei, Taiwan |
Nearfoundation | Remote | $87k - $112k
Binance | Singapore, Singapore |
CAIZ | Remote | $80k - $150k
Albusleo Ventures Inc. | Miami, FL, United States | $63k - $75k
Kronosresearch | Remote | $121k - $125k
Wf | New York, NY, United States | $115k - $206k
Nethermind | Remote | $84k - $115k
Genies | Remote | $45k - $63k
Nansen | Remote | $105k - $108k
Chainreaction | Tel Aviv, Israel | $115k - $126k
Coinbase | Remote | $152k - $179k
Coin Market Cap Ltd
$71k - $103k estimated
LLM Algorithm Engineer
Global / Hong Kong / Kuala Lumpur / London / Penang / Singapore / Taipei
CMC / Full-time / Remote
Job Responsibilities:
1. Perform advanced post-training of large language models (e.g. SFT, RLHF/RLAIF, continual pretraining).
2. Align models for reliable JSON-schema function calls and external tool usage.
3. Design, deploy, and operate Model Context Protocol (MCP) servers that handle checkpoint routing, manage context windows, and enforce safety gates.
4. Run distributed training and inference with DeepSpeed/FSDP, LoRA/QLoRA, and mixed precision, and tune performance on vLLM or Triton clusters (a minimal LoRA setup sketch follows this list).
5. Build offline and live evaluation pipelines for alignment, factuality, grounding, and hallucinations.
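The LoRA/QLoRA and distributed-training stack named in item 4 is standard Hugging Face tooling. The sketch below shows a minimal LoRA configuration for supervised fine-tuning; the base checkpoint name and the rank/alpha/dropout values are illustrative assumptions, not requirements taken from this posting.

```python
# Minimal LoRA SFT setup sketch (assumes transformers and peft are installed).
# The checkpoint name and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"  # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters to the attention projections; only these adapter
# weights are trained, which is what keeps LoRA/QLoRA fine-tuning cheap.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # reports trainable vs. total parameters
```

For QLoRA, the same adapter configuration is applied on top of a 4-bit quantized base model, and DeepSpeed or FSDP wraps the resulting model for multi-GPU training.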
Qualifications:
1. Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
2. 3+ years of experience in developing and optimizing large language models.
3. Proven track record in implementing advanced post-training techniques (SFT, RLHF, RLAIF, continual pretraining).
4. Hands-on experience with distributed training frameworks (DeepSpeed, FSDP) and optimization techniques (LoRA, QLoRA, mixed precision).
5. Familiarity with model alignment, JSON-schema function calls, and external tool integration (a sample tool schema follows this list).
6. Experience in building and maintaining evaluation pipelines for model performance assessment.
7. Proficiency in Python and relevant machine learning frameworks (e.g., PyTorch, TensorFlow).
8. Strong understanding of distributed systems and high-performance computing.
9. Experience with model deployment and inference optimization on vLLM or Triton clusters.
10. Knowledge of JSON-schema and API development.
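Items 5 and 10 refer to JSON-schema function calls. Below is a hedged example of what such a tool definition typically looks like in the common function-calling format; the get_token_price function and its fields are hypothetical and not taken from this posting.

```python
# Hypothetical tool definition in the JSON-schema function-calling style.
# The function name and parameters are illustrative, not from the posting.
import json
from jsonschema import validate

get_token_price = {
    "type": "function",
    "function": {
        "name": "get_token_price",
        "description": "Return the latest spot price for a crypto asset.",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {"type": "string", "description": "Ticker symbol, e.g. BTC"},
                "currency": {"type": "string", "enum": ["USD", "EUR"]},
            },
            "required": ["symbol"],
        },
    },
}

# An aligned model should emit arguments that validate against the declared
# schema; an evaluation pipeline can enforce this with a JSON-schema validator.
model_output = {"symbol": "BTC", "currency": "USD"}  # example model-produced arguments
validate(instance=model_output, schema=get_token_price["function"]["parameters"])
print(json.dumps(get_token_price, indent=2))
```

Checking emitted arguments against the declared schema is one concrete way an evaluation pipeline can measure function-calling reliability.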