das moran fool.1072:

Are there suggested/hard limits as to how often we can call the APIs?

I’m writing a desktop program for viewing/managing events, and I’m calling the API to get the events for an entire world (yes, the entire world, not just a map or event). However, to keep everything timely, I’d like to call this fairly often, such as every minute or two (I’m sure users would love every 30 seconds, but I don’t think anet would like that). With HTTP compression, the bandwidth usage seems to be 40-50KB per call. Assuming 50KB and an update every minute, an hour’s bandwidth is ~3MB, and a day’s is ~72MB. Keep in mind that this is for each user.

So, are there API usage limits?

I’m also setting the User-Agent header to uniquely identify my desktop program, so that anet can track its usage (though since I’m probably the only one using it at the moment, the bandwidth usage is probably tiny compared to everything else).
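For reference, the polling loop I have in mind is roughly this (a minimal sketch in Python with the requests library, assuming the v1 events endpoint; the world ID and User-Agent string here are placeholders):

```python
import time

import requests

API_URL = "https://api.guildwars2.com/v1/events.json"
WORLD_ID = 1001      # placeholder world ID
POLL_INTERVAL = 60   # seconds; the cadence discussed above

session = requests.Session()
# Identify the client uniquely so anet can track its usage.
session.headers["User-Agent"] = "MyEventViewer/0.1 (example contact)"
# requests asks for gzip/deflate by default, which is what keeps each
# response in the 40-50KB range.

while True:
    resp = session.get(API_URL, params={"world_id": WORLD_ID}, timeout=30)
    resp.raise_for_status()
    events = resp.json()["events"]  # [{world_id, map_id, event_id, state}, ...]
    # ... diff against the previous snapshot / update the UI here ...
    time.sleep(POLL_INTERVAL)
```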

smiley.1438:

By simply typing “limit” into the search box, you’d find this:

The event data is updated in real-time, and there is no perceptible delay between an event state changing on the game server and it being updated in the API view.

Feel free to update as often as you’d like, though you probably don’t need to update more often than every 30 seconds or so.

also this: https://forum-en.guildwars2.com/forum/community/api/How-Often-I-can-Access-the-api/2071877

This information may be useful, too:

Yes, please cache the results!

The names are guaranteed to be static between game patches/builds, but it’s possible for the list to grow between patches. The Events API, like the Item API, only shows events and maps that have been discovered by players.

So the ‘perfect’ logic would be to cache the results, and only do a lookup after a patch or if you don’t have a cached result for a particular event ID. And you probably don’t really need to do it between patches, as event names will rarely if ever change after being created.
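In practice that quote boils down to a memoized lookup; a minimal sketch of the idea (assuming Python with requests and the v1 event_names endpoint; persisting the cache across runs is left out):

```python
import requests

_event_names = {}  # event_id -> name; persist this to disk between patches

def event_name(event_id):
    """Return an event's name, hitting the API only on a cache miss."""
    if event_id not in _event_names:
        resp = requests.get("https://api.guildwars2.com/v1/event_names.json")
        resp.raise_for_status()
        # One call returns every discovered name, so a single miss refills the cache.
        _event_names.update((e["id"], e["name"]) for e in resp.json())
    return _event_names.get(event_id, "(not yet discovered)")
```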

das moran fool.1072:

Thanks. For some reason, I tried everything but “limits”.

60 seconds it is.

Lil Puppy.5216:

This should be included in the Documentation sticky; you wouldn’t want someone overloading your APIs with calls every 250ms, now would you!

Most public APIs from larger companies have usage limits. This should be documented so that you can also update the limit in the future, when there are hundreds or thousands of apps and websites hitting you several times a minute.

Sorry for the bump but as a developer, this is important info.

Polarbear.5896:

Really, you could optimize an event list in a known chain [A,B,C], and if [A] is in “Warmup” or “Preparation” or “Active”, you don’t need to call events [B,C] until A reads “Success”, depending on how you’re programming it.
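Something like this, say (a rough Python sketch; the state strings match the API, but the chain/state bookkeeping is assumed):

```python
def events_to_poll(chain, last_states):
    """Given a chain [A, B, C] and each event's last-known state,
    poll only the earliest event that hasn't read Success yet;
    the later links can't start until their predecessor finishes."""
    for event_id in chain:
        if last_states.get(event_id) != "Success":
            return [event_id]
    return [chain[0]]  # whole chain finished; watch for it restarting
```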

Healix.5819:

Really, you could optimize an event list in a known chain [A,B,C], and if [A] is in “Warmup” or “Preparation” or “Active”, you don’t need to call events [B,C] until A reads “Success”, depending on how you’re programming it.

You also need to factor in the previous state, since every event is a little different. For example, some events progress from active to warmup instead of success; and once their chain starts, some events progress to preparation, some change to warmup, and others don’t change at all.

To actually know whether an event was completed, for example, you need to look for an active event that is no longer active. To check whether it completed successfully or not, you need to check for a failed condition or whether the chained fail event is starting. In cases where the event actually failed but the state progressed to warmup and the fail event is also in warmup, it’s practically a guess unless you watch that event for the next few minutes to see if it becomes active.
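Put concretely, that detection logic amounts to a transition check between successive snapshots. A rough sketch (Python; the state names match the API, and the "unknown" branch is the guesswork case described above):

```python
def detect_completions(previous, current):
    """Compare two polls' snapshots (event_id -> state) and yield
    events that appear to have ended since the last poll."""
    for event_id, old_state in previous.items():
        new_state = current.get(event_id)
        if old_state == "Active" and new_state != "Active":
            if new_state == "Success":
                yield event_id, "success"
            elif new_state == "Fail":
                yield event_id, "failed"
            else:
                # Jumped straight to Warmup/Preparation: outcome unclear.
                # Watch the chained fail event for the next few minutes
                # to disambiguate, as described above.
                yield event_id, "unknown"
```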

Polarbear.5896:

Yeah, that’s if you want timers. I didn’t really need them for my desktop app, though; it just needs to show whether they’re up, and I’m only pinging for the 32 events that I try to get to each day.

Yamagawa.5941:

Those limits are great for the event APIs.

I’m looking at releasing my own crafting tool; the ones I’ve looked at just have not catered well to people working to make a profit…

So at first launch, my tool spams the API for recipes and for the items tied to those recipes. This is a few thousand API hits, with no real delay imposed between hits (currently).

Once it’s finished that, it spams the API to get details for every known item, so that it can match up items not in the recipe API.

Ideally, I’d like to get 200 or so results per hit, but I’m limited to one result per hit and a total of tens of thousands of queries. One thread going full speed gets me 4-6 hits a second… But it occurs to me:
That’s a kitten ton of server hits. The server may not like them (add a delay?). Users may not like waiting for them (add more threads?)

So:
Plan 1) Find out the formal limits for API access. One hit every 30 seconds simply will not do.

Plans 2, 3, 4, 5) Handle any user needs not met by plan 1.

Healix.5819:

You should be caching all items and recipes. You only need to use the API when you’re initially building the cache and when new items/recipes are added, which you could check once a day or week. The cache should be packaged with your app so the user doesn’t have to take the time to build it themselves.

The once every 30 seconds was simply a suggestion for how often to look up an event. 30s is about 3x too long though, since some events can come and go that quickly.

There is no actual limit though. For example, when I build my item cache from scratch, I do about 130 calls per second. For events, I’ve been doing 23 per second for the last few weeks.

DarkSpirit.7046:

I agree with Healix; you should be caching all the items and recipes, as they should hardly ever change.

I have a script that pulls down all the items and recipes to cache them into files, so my app doesn’t ever need to pull them from the web. This improves app performance significantly.

Yamagawa.5941:

Yes, I am already caching the data.
A pre-built cache (plan 2) is part of this week’s work, as is plan 3 (priority loading).

Data changes. When recipes change, users will delete the recipe cache.
When items change, users will delete the item cache.

Regardless of any changes I make to cache data better and load data smarter, kitten happens and the data needs to be loaded. How fast do I load it?

//Yamagawa

Healix.5819:

Some item data will change occasionally, when those items are buffed, nerfed, or fixed. Most items, however, will never change and will remain static for the life of the game. You could either never account for modified items, manually update them by watching the patch notes, or re-fetch item data based on how old that item’s cache entry is, for example once a month.

For new items, you just need to pull the list of item IDs and check which IDs you don’t already have cached. You then only request and cache those IDs. No need to start from scratch.
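That incremental check only costs one extra request for the ID list. A possible sketch (Python with requests, assuming the v1 items endpoints; the store callback stands in for whatever writes to your cache):

```python
import requests

def update_item_cache(cached_ids, store):
    """Fetch details only for item IDs we haven't cached yet.
    `store` is whatever writes one item record into the local cache."""
    resp = requests.get("https://api.guildwars2.com/v1/items.json")
    resp.raise_for_status()
    missing = set(resp.json()["items"]) - set(cached_ids)
    for item_id in sorted(missing):
        detail = requests.get("https://api.guildwars2.com/v1/item_details.json",
                              params={"item_id": item_id})
        detail.raise_for_status()
        store(detail.json())
```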

As for how fast to make requests, keep it simple: do 1 thread with a 0s delay. If they ever enforce a limit, it probably wouldn’t matter how fast you make calls, but rather how many you’ve made recently, for example a limit of 1000 requests per hour. If they ever did limit it, you’d basically just have to watch for timed-out or bad responses and delay for X minutes. Other parts of the GW2 site are limited like this already; after 10 or so requests, you start to get internal server errors. Both the leaderboards and the trading post (when selling) do this, for example.
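The client-side handling for a hypothetical limit like that is just a retry wrapper; for example, something like this sketch (the retry count and delay are arbitrary placeholders):

```python
import time

import requests

def get_with_backoff(url, params=None, retries=5, delay=300):
    """Retry timed-out or server-error responses after a fixed delay,
    which is all a soft limit like the one described would require."""
    for attempt in range(retries):
        try:
            resp = requests.get(url, params=params, timeout=30)
            if resp.status_code < 500:
                return resp  # success, or a client error worth surfacing
        except requests.RequestException:
            pass  # timeout/connection error; treat like a bad response
        time.sleep(delay)  # the "delay for X minutes" part
    raise RuntimeError("request still failing after %d attempts" % retries)
```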

Also, consider SQLite for an easy way to cache using a database.
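A minimal version of that SQLite cache might look like the following (a sketch; the table layout is made up for illustration, storing each item’s raw JSON keyed by ID):

```python
import json
import sqlite3

db = sqlite3.connect("gw2cache.db")
db.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, data TEXT)")

def store_item(item):
    # Keep the raw API response; json.loads() it again when reading.
    db.execute("INSERT OR REPLACE INTO items VALUES (?, ?)",
               (int(item["item_id"]), json.dumps(item)))
    db.commit()

def load_item(item_id):
    row = db.execute("SELECT data FROM items WHERE id = ?",
                     (item_id,)).fetchone()
    return json.loads(row[0]) if row else None
```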

Yamagawa.5941:

So… keep the load process as I have it now. Test the DB cross-platform, fix as needed. Implement prioritized loading to improve starting from blank DBs. Implement on-demand refresh of select items to discourage dumping the DB. Save plans 4 & 5 for when I’m bored. If they implement access limits… eh, my tool’s design already allows for graceful handling of server errors. And I hope they soon allow multiple items/recipes in a single request.

//Yamagawa

das moran fool.1072:

For my current program, which I’m rewriting as a web app, I’m sucking down all events for my server (currently 1665 events) once a minute, and I’m saving the historical event-change data into a database (not every event every poll, only the events that changed status). Right now, I’m keeping a week’s worth of data, and that’s over 330,000 event status changes for just one server (around ~30 event changes a minute). It’s really cool, in a geeky sort of way, to see how often an event occurs, along with server resets.
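Recording only the transitions is what keeps that table manageable: the log grows at ~30 rows a minute instead of 1665 rows per poll. A rough sketch of that bookkeeping (Python; the event_log table and its columns are assumed):

```python
import time

last_states = {}  # event_id -> last recorded state

def record_changes(events, db):
    """Insert a row only when an event's state differs from the last
    one recorded, so the log grows per change rather than per poll."""
    now = int(time.time())
    for e in events:
        if last_states.get(e["event_id"]) != e["state"]:
            db.execute(
                "INSERT INTO event_log (event_id, state, ts) VALUES (?, ?, ?)",
                (e["event_id"], e["state"], now))
            last_states[e["event_id"]] = e["state"]
    db.commit()
```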

I’d love to suck down all events for all servers, but my database server is running on my slow home fileserver, and there’s no way it could handle databases of that size.

Yamagawa.5941:

Oofh. That’s… ambitious. And I thought all items and recipes were big stuff…

Oh, what I could do with your data had I only the time…
I might start in the direction of http://www.torrenal.com/Census/ – plot the events on a map, and contrast how different servers hit events.
//Yamagawa