OpenAI API Key

Machined allows you to generate an unlimited number of content clusters and articles for an insanely low cost. We are able to do this by offloading the AI costs directly to you, the user. In essence, you pay for your own AI usage directly with OpenAI. This means that you will need to manage your own OpenAI account and provide your own API keys.

Follow our guide to configuring your API keys.
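
If you want to confirm that a newly created key works before adding it to Machined, a quick check like the one below is usually enough. This is only a sketch: it assumes the official openai Python package (version 1 or later) and that the key is available in the OPENAI_API_KEY environment variable.

  import os
  from openai import OpenAI, AuthenticationError

  client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

  try:
      client.models.list()  # a cheap call that fails fast if the key is invalid
      print("Key accepted by OpenAI.")
  except AuthenticationError:
      print("Key rejected - check that it was copied correctly and is still active.")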

Troubleshooting

If you experience errors writing articles, the most common causes are issues with your OpenAI account, the API keys being used, or OpenAI's service status.

  1. Ensure you have an OpenAI Developer account with billing enabled (credit card on file)

  2. Ensure you have access to GPT-4 if you intend to use it (see the sketch after this list)

  3. Check whether OpenAI is experiencing any intermittent errors (see Intermittent Errors below)

  4. Check whether your account is being rate limited (see Rate Limits below)
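
For steps 1 and 2, a short script like the following can help: it lists the models your key can access and reports whether any GPT-4 model is among them. This is a sketch only, assuming the official openai Python package (version 1 or later); checking the "gpt-4" prefix of model IDs is an assumption about how those models are named.

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  model_ids = [m.id for m in client.models.list()]
  gpt4_models = [m for m in model_ids if m.startswith("gpt-4")]

  if gpt4_models:
      print("GPT-4 access available:", ", ".join(sorted(gpt4_models)))
  else:
      print("No GPT-4 models are visible to this key.")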

Please be aware that failed articles may still incur a token cost if an error occurs midway through structuring or writing an article.

Intermittent Errors

OpenAI APIs often experience intermittent errors. Sometimes certain models have slow response times, sometimes models are unresponsive, and at times the entire API might be unavailable. You can check the OpenAI Status Page for more info, but bear in mind that it is usually updated retrospectively rather than in real time.
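
If you prefer to check the status from a script rather than the web page, something like the sketch below may work. It assumes the OpenAI Status Page is hosted on Atlassian Statuspage, which exposes a standard JSON endpoint; treat the URL as an assumption rather than a documented API.

  import requests

  resp = requests.get("https://status.openai.com/api/v2/status.json", timeout=10)
  resp.raise_for_status()
  status = resp.json()["status"]
  print(f"{status['indicator']}: {status['description']}")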

Please note that there is nothing we can do to circumvent intermittent OpenAI errors. Keep a close eye on their status page and reach out to us if you are experiencing unexpected errors.

Rate Limits

OpenAI imposes rate limits on all accounts. This prevents any one account from making too many requests and impacting the service for others, and allows OpenAI to better manage its infrastructure and services. These rate limits are enforced on all API calls through a specific error (an HTTP 429 response) that tells the caller they have exceeded a rate limit and should slow down. You can read more about rate limits in OpenAI's documentation.

Machined handles most rate limits using what's called an exponential backoff strategy, as recommended by OpenAI. In essence, when OpenAI tells us that we are making too many requests, we back off from making further requests for a short time, waiting progressively longer after each rate-limit response. Doing this allows us to write articles as quickly as possible while also taking care not to overwhelm your OpenAI account or compromise your account's standing with OpenAI.

Please note that our backoff strategy will only retry up to a certain number of times before quitting with a final error.
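
As an illustration of the strategy described above, the sketch below retries a chat completion with exponentially increasing delays and a retry cap. The delays, jitter, retry cap, and model name are illustrative assumptions, not Machined's actual parameters; it assumes the official openai Python package (version 1 or later).

  import random
  import time
  from openai import OpenAI, RateLimitError

  client = OpenAI()

  def complete_with_backoff(prompt, max_retries=5):
      delay = 1.0
      for attempt in range(max_retries):
          try:
              return client.chat.completions.create(
                  model="gpt-4",  # illustrative model name
                  messages=[{"role": "user", "content": prompt}],
              )
          except RateLimitError:
              if attempt == max_retries - 1:
                  raise  # give up with a final error once the retry cap is reached
              time.sleep(delay + random.uniform(0, delay))  # back off, with jitter
              delay *= 2  # double the wait after each rate-limit response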

Storage and Security

Your API keys are encrypted at rest and in transit using AES-256 (Advanced Encryption Standard), the same standard used by the US Government to protect its own files.

Once a key is set, it is never stored or transferred in plain text. Furthermore, two separate security secrets, stored on separate servers, are needed to decrypt the key on each use.
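
As a purely conceptual illustration of the two-secret idea (not Machined's actual implementation), the sketch below derives an AES-256 key from two separate secrets, so neither secret alone can decrypt the stored API key. It assumes the Python cryptography package; all names and parameters are hypothetical.

  import os
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM
  from cryptography.hazmat.primitives.kdf.hkdf import HKDF

  secret_a = os.urandom(32)  # held by server A
  secret_b = os.urandom(32)  # held by server B

  def derive_key(a: bytes, b: bytes) -> bytes:
      # Both secrets are required to reconstruct the 256-bit AES key.
      hkdf = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"api-key-at-rest")
      return hkdf.derive(a + b)

  nonce = os.urandom(12)
  ciphertext = AESGCM(derive_key(secret_a, secret_b)).encrypt(nonce, b"sk-example-key", None)

  # Decryption re-derives the key, so it needs both secrets on every use.
  plaintext = AESGCM(derive_key(secret_a, secret_b)).decrypt(nonce, ciphertext, None)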
