→ know the cost of your taskmaster calls
→ @ollama provider support
→ baseUrl support across roles
→ task complexity score in tasks
→ strong focus on fixes & stability
→ over 9,500⭐ on @github
follow + bookmark + dive in
👀👇
1
introducing cost telemetry for ai commands
→ costs reported across ai providers
→ breaks down input/output token usage
→ calculates cost of ai command
→ both CLI & MCP
we don't store this information yet
but it will eventually be used to power model leaderboards +++
2
knowing the cost of ai commands might make you more sensitive to certain providers
@ollama support uses your local ai endpoint to power @taskmasterai ai commands at no cost
→ use any installed model
→ models without tool_use are not ideal
→ telemetry will show $0 cost
3
baseUrl support has been added to let you adjust the endpoint for any of the 3 roles
you can adjust this by adding 'baseUrl' to any of the roles in .taskmasterconfig
this opens the door to currently unsupported ai providers like @awscloud @Azure
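a minimal sketch of what this could look like in .taskmasterconfig — the 'baseUrl' key is the documented addition; the surrounding structure, provider, and model id here are illustrative:

```json
{
  "models": {
    "main": {
      "provider": "openai",
      "modelId": "gpt-4o",
      "baseUrl": "https://my-gateway.example.com/v1"
    }
  }
}
```

point baseUrl at any openai-compatible endpoint and the main role's requests go there instead of the default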
4
after parsing a PRD into tasks, analyze-complexity asks ai to score each task's complexity and figure out how many subtasks it needs based on that score
task complexity scores now appear across task lists, next task, and task details views
s/o @JoeDanz
5
bug fixes across the stack are now the main focus & stability is at an all-time high
→ npm i -g task-master-ai@latest
→ MCP auto-updates
→ founding team is locked-in!
→ join discord.gg/taskmasterai
→ more info task-master.dev
→ hug your loved ones
vibe on friends 🩶
help taskmaster reach more people by retweeting the first tweet and sharing it with your friends who are tired of getting stuck coding with ai
we've completely redesigned init. it now offers 2 modes:
SOLO (local): prds and tasks in local files
TOGETHER (hamster): briefs and tasks on @usehamster
→ plans created on hamster, executed with taskmaster
→ login / signup to hamster from CLI
2
tm export turns your local tasks into shareable team plans
→ reverse-engineers a PRD from your tasks
→ both plan + tasks live on @usehamster
→ implement with team, same taskmaster workflow
→ no api keys, hamster handles all inference
→ no waitlist
→ mcp sampling support (zero api keys)
→ @geminicli integration & provider
→ advanced @claude_code rules
→ language override
→ @grok 4 support
→ @GroqInc support
→ parse-prd auto-selects task number
& more
follow + bookmark + vibe
👀👇
1
#1 request! you can now use taskmaster without api keys in any mcp client that supports sampling
→ client-side llm provider for all roles
→ setup with init - auto-detects local models
→ no extra cost or api keys needed
FYI: only @code supports sampling rn
s/o @OrenMe
2
@geminicli is now integrated as a provider for taskmaster ops with zero api keys required
→ add gemini rules anytime
→ supports free tier & paid gemini cloud assist
→ use as provider for main/fallback/research roles
→ updated for recent @geminicli changes
→ @claude_code provider without API keys
→ init with IDE-specific profiles for @cursor_ai @claude_code @windsurf_ai @roo_code @cline @code @Trae_ai
→ cleaner git repos & Python support
→ PROVIDER_BASE_URL
follow + bookmark + dive in
👀👇
1
we've shipped @claude_code provider support: Taskmaster ops now run without an Anthropic API key
→ CLI integration for Opus and Sonnet models
→ uses your Claude Code client for Taskmaster ops
→ totally optional
s/o @neno_is_ooo @RalphEcom
you can set up Claude Code as your provider through either the CLI or the MCP
→ CLI: use the `models` command with --claude-code
→ MCP: simply tell the agent to set your model using the claude-code provider
→ telemetry details will indicate $0.00 cost for taskmaster commands
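the CLI route sketched out — `--claude-code` on the `models` command is what the thread describes; the binary name and model id here are assumptions:

```shell
# set the main role to a Claude Code model (no Anthropic API key needed)
# "sonnet" is an illustrative model id; swap in opus if you prefer
task-master models --set-main sonnet --claude-code
```

via MCP it's even simpler: tell your agent to set the model using the claude-code provider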
→ ai model/provider management
→ @OpenAI @GoogleAI @xai @OpenRouterAI
→ @roo_code support
→ smarter parse-prd & expand, next is now subtask-aware, --research for add-task +++
→ 7.5k⭐ + 10k downloads/week
follow + bookmark + dive in
👀👇
1
our #1 feature request is fulfilled. you can now use taskmaster with 200+ models across 6 ai providers, with many more on the way. full cli + mcp support.
new:
@OpenAI @GoogleAI @xai @OpenRouterAI
soon:
@ollama @Azure @awscloud + BASE_URL in config
keep reading
2
ai models can now be used across 3 distinct roles:
→ main
→ research
→ fallback
supported models for each role have been streamlined, plus custom model support via @OpenRouterAI
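a sketch of how the three roles might sit side by side in config — the shape, field names, and model ids are illustrative, not the exact schema:

```json
{
  "models": {
    "main":     { "provider": "openai",     "modelId": "gpt-4o" },
    "research": { "provider": "openrouter", "modelId": "perplexity/sonar" },
    "fallback": { "provider": "google",     "modelId": "gemini-1.5-pro" }
  }
}
```

main handles day-to-day ops, research handles lookups, and fallback kicks in when main fails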