Eowin Of Rohan.2619:

Hi,

Since the Clockwork Chaos update, the event list has become much bigger: all the Scarlet attack sub-events have entries in the API: https://api.guildwars2.com/v1/events.json

There are 1282 of these (+13 Scarlet meta events), and we probably don’t need them in the API, since they generate a huge amount of data without providing any useful information.

It would be nice to have another API, say events2.json, providing the same information but filtering out all the events that are not necessary.
This includes the aforementioned 1282 Scarlet events, but also inaccessible and removed events (Crown Pavilion, Labyrinthine Cliffs, Queen’s Champion/Escorts …). For some reason, the latter still have entries in the API, and some events are even Active as I write this:
http://www.gw2events.me/events/events.php?mode=5&map=922
http://www.gw2events.me/events/events.php?mode=5&map=929
http://www.gw2events.me/events/events.php?mode=7&list=1

My code is quite well optimized and can handle more data; I have no performance issue updating 200,000 events every minute and consolidating the status-change times. BUT that data takes longer and longer to download, and this is becoming an issue. (Btw, from 1600×51 events when the API was created, there are now 4400×51 events … Scarlet is probably not the only thing responsible.)

Renting a server with better, guaranteed bandwidth would cost three times as much as my current server. And since I don’t get a single € or $ from ads as of now, this is not an option.


In addition to this generic blacklist-filtered API, it would be nice to have a webservice providing whitelist-filtered statuses.
https://api.guildwars2.com/v1/events.json?event_id=XXXXX is nice, but if you have to check ~40 specific events, you have to make ~40 JSON downloads. The “wget -i” option makes it much faster, but it’s still very slow compared to a single download.
I use this to update some fast-dying world bosses and some of their pre-events every 20 seconds. It would be nice to call a webservice, providing my list of event IDs and getting the full status list back in a single JSON file.
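
For illustration, here is roughly what the “wget -i” workaround looks like today (a minimal sketch; event_ids.txt is a hypothetical hand-made file with one event ID per line):

    # Build one URL per event ID, then fetch them all in a single wget
    # session so the connection is reused instead of reopened ~40 times.
    sed 's#^#https://api.guildwars2.com/v1/events.json?event_id=#' event_ids.txt > urls.txt
    wget -q -i urls.txt

Each response still arrives as a separate download, though; the webservice I’m asking for would collapse all of this into one file.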


Eowyn

Edit: copy-paste the links above instead of clicking on them. The forum redirection tool is bugged and replaces the character ‘&’ with ‘&amp;’, so the URLs are broken.

Khisanth.2948:

Since you brought up cost … it currently costs you less than $3 USD/mo for hosting? O_o

Anyway … invasions are still going on, so removing those wouldn’t make much sense. Crown Pavilion and Labyrinthine Cliffs are both returning as well.

The simplest thing you can do to reduce the bandwidth would be to request a compressed copy of the JSON, if you aren’t already doing that. It comes out to about 1/5th the size of the uncompressed version. I am only bringing this up because you mentioned wget, which is a bit lacking in this area, so you might not be doing it.
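
For example (a minimal sketch, assuming the API honors Accept-Encoding: gzip; curl handles the decompression itself, while wget leaves it to you):

    # curl negotiates compression and decompresses the response itself:
    curl -s --compressed https://api.guildwars2.com/v1/events.json -o events.json

    # wget needs the header set by hand and a manual decompression step:
    wget -q --header='Accept-Encoding: gzip' \
         -O events.json.gz https://api.guildwars2.com/v1/events.json
    gunzip events.json.gz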

Killer Rhino.6794:

As Khisanth said, the events are still going on, so by definition, they should be returned by the API.

Many of the concerns you’re raising, while valid, seem to be application-specific problems, and not necessarily the fault of the API service.

That being said, I agree (while not necessarily advocating for change) that it would be nice if there were some grouping of events within the API. The community-provided dragon event IDs come to mind; if there were an endpoint that returned event_group_ids for various collections (such as dragon events, Crown Pavilion, Labyrinthine Cliffs, Scarlet invasions, etc.), it could be something useful, perhaps.

Eowin Of Rohan.2619:

@Khisanth :
That’s 18€/month for a server (which is necessary because I run TS3, and because I use unix shell batch jobs to update my database with API information, two things that are impossible on a simple PHP/MySQL shared hosting plan).
I have “best effort” 100 Mbps bandwidth. I would have to pay 50€/month to get a server with “guaranteed” 200 Mbps.

Also, I know that invasions are still going on (which is why I didn’t even suggest rolling back to the way the API worked before they added the “Inactive” status, which would have prevented the Labyrinthine events from appearing in the JSON, but not the Scarlet ones).
But, unless you want to draw a live map, the only event statuses useful for Scarlet invasions are the meta events (13 different ones: http://events.gw2organizer.com/events/events.php?mode=7&list=2), not all 1282 red events.
As for the Crown Pavilion and Labyrinthine Cliffs events, they may return, which is why they should not be removed from the reference (event_names.json); but in the meantime we don’t need their statuses.
For these reasons, I was asking for a second JSON which would be pre-filtered, keeping the existing one as-is, so that people who want to draw full live maps for Scarlet can still do it.

Btw, tyvm for the compression tip. I didn’t know it was possible to request a compressed version of a file over HTTPS. I’ll test this!!


@Killer :
Grouping IDs are a very good idea. They could cover both my “filtered API” and my “whitelist request” needs.

- all “normal & permanent” events = ID 1
- temporary-content or other Living Story related events = one ID for each release
- all world bosses (and pre-events?) = another ID
- …

I would be able to call ID 1 plus the currently active temporary-content IDs every minute, and the world bosses’ ID every 20 seconds.
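
Something like this hypothetical polling loop (to be clear, event_group_id and the ID values are pure invention to illustrate the schedule; nothing like this exists in the real API yet):

    # Hypothetical: event_group_id is an imagined parameter, not part of
    # the real API. Assume ID 1 = permanent events, ID 3 = world bosses.
    while true; do
      wget -q -O permanent.json \
        'https://api.guildwars2.com/v1/events.json?event_group_id=1'
      for i in 1 2 3; do   # world bosses three times per minute
        wget -q -O bosses.json \
          'https://api.guildwars2.com/v1/events.json?event_group_id=3'
        sleep 20
      done
    done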

Khisanth.2948:

I was thinking of DreamHost VPS or Linode. Their cheapest plans meet what you described. DreamHost has no bandwidth limits, and inbound data is free on Linode, so the cost doesn’t change no matter how big the JSON file grows.

AdelphiA.9287:

What I would say is: get a decent NAS and run your own host! That then costs nothing beyond your normal broadband.

Eowin Of Rohan.2619:

Very bad idea.

Drakma.1549:

I was thinking of DreamHost VPS or Linode. Their cheapest plans meet what you described. DreamHost has no bandwidth limits, and inbound data is free on Linode, so the cost doesn’t change no matter how big the JSON file grows.

I may have to do something like this.

gw2stats.net recently crossed the 5 TB bandwidth mark with my current hosting provider. I never expected that much traffic, and I had a 500 GB cap :/