joshflosh.4517:

Hey,

I am currently programming an app for GW2, and when downloading the item lists (/v2/items/?page=..) I noticed that your servers don't seem to support persistent connections, and therefore no HTTP pipelining either. Would it be possible for you guys to enable those? That could drastically improve the performance of fetching the whole item list.
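For context, the page-by-page fetch described above can be sketched like this (a Python sketch, not the app's actual code; the page_size cap of 200 and the ~38k item total are taken from later in the thread and may not be exact):

```python
import math

# Hypothetical sketch of the paginated fetch being described.
# The endpoint is real; PAGE_SIZE = 200 is assumed to be the API's cap.
API_BASE = "https://api.guildwars2.com/v2/items"
PAGE_SIZE = 200

def page_urls(total_items: int, page_size: int = PAGE_SIZE) -> list[str]:
    """Build one URL per page needed to cover every item."""
    pages = math.ceil(total_items / page_size)
    return [f"{API_BASE}?page={p}&page_size={page_size}" for p in range(pages)]

# With ~38k items this comes to ~190 separate requests, which is
# why a persistent (keep-alive) connection would help so much.
urls = page_urls(38000)
```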

On a side note: an even better solution would be enabling ?ids=all for that endpoint. But since I don't know your backend infrastructure, that might just run into server limitations.

Btw, I love API v2, it already improved performance by a lot, thank you for that!

smiley.1438:

On a side note: an even better solution would be enabling ?ids=all for that endpoint.

I guess the servers would then explode…

I’d recommend something like RollingCurl, which speeds up the whole process (a few minutes on a fast connection).

https://github.com/codemasher/gw2-database/blob/master/classes/rollingcurl.class.php
https://github.com/codemasher/gw2-database/blob/master/classes/gw2items.class.php#L186
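For readers not working in PHP, the rolling-window idea behind RollingCurl can be sketched with a Python thread pool (a sketch of the technique, not the library's API; `fetch` stands for whatever function downloads one URL):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(urls, fetch, window=10):
    """Run fetch(url) for every URL, keeping at most `window`
    requests in flight at once -- the same idea as RollingCurl's
    rolling request window."""
    with ThreadPoolExecutor(max_workers=window) as pool:
        # pool.map yields results in the same order as the input URLs
        return list(pool.map(fetch, urls))
```

In practice `fetch` would wrap something like `urllib.request.urlopen` and JSON-decode the response body.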

joshflosh.4517:

A few minutes? Maybe I'm misreading something there, but I currently fetch all items in 15 seconds^^

smiley.1438:

So where’s the problem then? (We’re talking about ~38k items and ~150MB DB size)

I’ve a very slow connection, so I can only estimate – there have been reports ranging from a couple of seconds to a few minutes. However, it also depends on how many languages you pull from the DB.

joshflosh.4517:

Well, it’s not a problem, just a question, since I care about optimization. 150MB? Are we talking about the same thing?

Pat Cavit.9234:

With our current setup keep-alive/pipelining isn’t possible, sorry.

/v2/items?ids=all would be a massive response and probably make our servers very, very unhappy.

joshflosh.4517:

Ok, I thought so, thanks for the response nonetheless!

StevenL.3761:

I always thought that the problem with ids=all is that it runs synchronously. I don’t experience any server hiccups when I send hundreds of smaller requests (200 ids) in parallel. Shouldn’t synchronous requests for the same data actually be less demanding?
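The batching described above (hundreds of smaller requests of 200 ids each, sent in parallel) can be sketched like this; the batch size of 200 comes from the post, while the helper names and endpoint usage are my own illustration:

```python
def id_chunks(ids, size=200):
    """Split the full id list into batches of `size`, one per request."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def ids_url(chunk):
    """Build a ?ids= request URL for one batch of item ids."""
    return "https://api.guildwars2.com/v2/items?ids=" + ",".join(map(str, chunk))

# At ~38k ids and 200 per request, that's ~190 requests which can
# all be sent in parallel instead of one huge ids=all response.
```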

Unless the problem is that you buffer every json response in memory before you start writing to the underlying socket. I don’t know how difficult it would be to stream the response, but that would give you an immediate, noticeable performance boost.