Is there a way to check how many credits a task will cost before I begin?

Understand how Manus credits are consumed, optimize your usage, and learn to recognize AI behaviors.

Updated over 2 weeks ago

Understanding Manus's Credit Consumption and AI Behavior

This article provides a comprehensive overview of how credits are utilized within the Manus platform and offers clarification on certain AI behaviors, such as hallucinations. A clear understanding of these aspects will enable users to leverage Manus more effectively and manage their interactions with greater efficiency.

The Mechanics of Credit Consumption

At present, Manus cannot autonomously judge or regulate its own credit consumption. The number of credits used in each interaction depends on the complexity of the assigned task.

For a more detailed explanation of the credits system, we encourage you to consult our official help center article on credits, as well as our official Credits Usage document at the following link: https://manus.im/help/credits.

Strategies for Optimizing Credit Usage

Should you observe that your credits are being expended at a rapid rate, the following recommendations may assist in optimizing your usage:

  • Task Decomposition: Instead of issuing a series of instructions within a single, protracted conversation, deconstruct complex tasks into smaller, more focused sub-tasks. The greater the contextual load Manus must process, the higher the corresponding execution cost.

  • Initial Precision: Articulating your requirements precisely at the outset of a conversation can significantly reduce the overall context length, thereby conserving credits.

  • Detailed Instructions: Providing detailed, specific instructions creates a more efficient path to your desired outcome, minimizing the need for iterative clarification and the associated credit expenditure.

Future Enhancements in Credit Consumption Transparency

From a user experience standpoint, we recognize the widespread desire for real-time visibility into credit consumption, as well as for pre-task estimates of credit expenditure. Our product management team has identified this as a key area for improvement and is working toward a solution; we anticipate rolling out features to address it in the near future. Your continued support is greatly appreciated, and we encourage you to stay informed about future updates to Manus.

A Primer on AI Hallucinations

AI hallucinations are defined as instances in which an artificial intelligence generates information that is either incorrect or entirely fabricated. Awareness of these potential occurrences is essential for the accurate interpretation of Manus's responses. The following table outlines common scenarios in which AI hallucinations may manifest:

  • Promises Regarding Credit Consumption: Manus may occasionally generate statements about the number of credits a particular task will consume. As noted above, the system is not yet equipped to predict credit usage accurately, so any such statements should be regarded as hallucinations rather than factual commitments.

  • Admitting to Systemic Problems: Manus may appear to acknowledge a problem or bug within its own systems. While the AI is designed to be conversational and helpful, it lacks the self-awareness and capacity for self-diagnosis of technical issues. These "admissions" are typically conversational responses to user-expressed frustration and should not be interpreted as confirmation of a genuine system-wide malfunction.

  • Pledges of Unattainable Actions: Manus might occasionally commit to tasks beyond its current operational capabilities, for instance asserting an ability to execute actions in the physical world or guaranteeing an outcome that is not technically feasible. Users should critically evaluate the AI's assertions and keep the platform's inherent limitations in mind.

  • Fabricated Content: Manus may invent information that does not exist, such as non-existent academic papers, false citation sources, or fictional historical events.

  • Logical Hallucinations: Manus might occasionally produce logical errors or contradictions in its reasoning, including mutually contradictory conclusions within the same response.

We trust that this article has provided valuable insights into the operational nuances of Manus. Our team remains steadfast in its commitment to the continuous improvement of the platform, with the goal of delivering a more transparent and reliable user experience. We thank you for your continued support.

Common related FAQs:

  • Why can't I see a credit estimate before starting a task?

  • How can I reduce my credit consumption?

  • What makes a task complex?