API Troubleshooting and Support

Troubleshooting, error handling, and other tidbits.

Despite our best efforts, you may need some extra assistance in understanding why something isn't working quite right. Well, maybe not you, but certainly someone else, right?

Step 1

If you haven't already, visit our Swagger page here.

The Swagger page contains key information on all error handling for each route and is incredibly helpful for resolving issues.

Step 2

Need additional support? You can also reach out to our support team by email at api@meetgradient.com.


Miscellaneous Technical Info

Error Handling

When working with our API, it is important to understand best practices for handling errors. Server errors (5xx) are usually temporary, and the request should be retried after a suitable delay. If the error persists across retries, submit the request to our support team for review. Client errors (4xx), by contrast, indicate a problem with the request itself: stop sending the request immediately and correct it before trying again.

Retry Policy

Our application, like all cloud-based applications, can be sensitive to transient faults such as the momentary loss of network connectivity, the temporary unavailability of a service, or timeouts that occur when a service is busy.

These faults are typically self-correcting, and if the action that triggered a fault is repeated after a suitable delay it's likely to be successful. An application trying to access the database might fail to connect, but if it tries again after a delay it might succeed.

If an application detects a failure when it tries to send a request, it can handle the failure using one of the following strategies:

  • Cancel
    If the fault indicates that the failure isn't transient or is unlikely to be successful if repeated, the application should cancel the operation and report an exception. For example, an authentication failure caused by providing invalid credentials or any 400 errors is not likely to succeed no matter how many times it's attempted.
  • Retry
    If the specific fault reported is unusual or rare, it might have been caused by unusual circumstances such as a corrupt network packet. In this case, retry the failing request immediately; the same failure is unlikely to be repeated and the request will probably succeed.
  • Retry after delay
    If the fault is caused by one of the more commonplace connectivity or busy failures, the network or service might need a short period while the connectivity issues are corrected or the backlog of work is cleared. The application should wait for a suitable time before retrying the request.

The retry policy should be tuned to match the nature of the failure. For some non-critical operations, it's better to fail fast rather than retry several times and impact throughput. For example, in an interactive web application accessing a remote service, it's better to fail after a smaller number of retries with only a short delay between retry attempts, and display a suitable message to the user (for example, “please try again later”). For a batch application, it might be more appropriate to increase the number of retry attempts with an exponentially increasing delay between attempts.
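The three strategies above can be sketched as a single retry helper. This is a minimal illustration, not part of our API: the `send_request` callable, the status codes, and the delay values are assumptions you would tune to your own application.

```python
import random
import time

def retry_with_backoff(send_request, max_attempts=3, base_delay=1.0):
    """Retry transient (5xx) failures with exponential backoff.

    send_request is any callable returning a (status_code, body) pair.
    Client errors (4xx) are cancelled immediately and never retried,
    since repeating an invalid request cannot succeed.
    """
    for attempt in range(1, max_attempts + 1):
        status, body = send_request()
        if status < 400:
            return body  # success
        if 400 <= status < 500:
            # Cancel: the request itself is invalid.
            raise ValueError(f"client error {status}: fix the request before retrying")
        if attempt == max_attempts:
            raise RuntimeError(f"server error {status} after {max_attempts} attempts")
        # Retry after delay: exponential backoff plus a little jitter
        # so concurrent clients don't all retry at the same instant.
        delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
        time.sleep(delay)
```

For an interactive web application you might call this with `max_attempts=2` and a short `base_delay`, then show a "please try again later" message on failure; a batch job could raise `max_attempts` and let the delays grow.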

500 Server Errors

Numerous components can generate errors anywhere in the life of a given request. The usual technique for dealing with these error responses is to implement retries. This technique increases reliability and reduces operational costs.

400 Client Errors

A 400 error means that the request isn't valid. In the case of a 400 error, stop sending requests and analyze the response in order to resolve the issue.
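As a hypothetical sketch of that analysis step, the helper below pulls the error details out of a 4xx response so the request can be corrected. The `{"message": ...}` body shape is an assumption; consult the Swagger page for the actual error schema of each route.

```python
import json

def analyze_client_error(status, body):
    """Summarize a 4xx response; return None for non-client-error statuses."""
    if not 400 <= status < 500:
        return None
    try:
        # Assumed error schema: a JSON object with a "message" field.
        details = json.loads(body)
    except json.JSONDecodeError:
        # Fall back to the raw body if the response isn't JSON.
        details = {"message": body}
    return f"HTTP {status}: {details.get('message', 'no message provided')}"
```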
