Hi, we have two versions of Flexmonster, one with a 330 MB CSV file and one with a 750 MB JSON file, and neither of them works anymore with the latest version of Chrome (80).
Chrome seems to stop loading at ~260 MB and does nothing afterwards. Do you have any more information on this? Maybe a workaround? Both are in production environments.
Thanks.
Hello, Seekme,
Thank you for reaching out.
We have run some tests on our end using the latest version of Chrome (80.0.3987.100), the latest version of Flexmonster (2.8.0) and a 497.6 MB CSV file.
Still, using our demo, we have not managed to reproduce the behavior you have encountered.
Could you please check your browser dev console for possible errors and share the results with us?
Also, it would greatly help us to make further progress in solving this case if you could provide a dummy dataset with which the issue is reproducible.
Looking forward to your reply.
Kind regards,
Vera
Thanks for your reply. I should have specified that my file is linked via the report -> dataSource -> filename setting.
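For reference, my setup is roughly like this (the filename below is just a placeholder for the real file):

import Flexmonster from "flexmonster";

new Flexmonster({
  container: "#pivot",
  toolbar: true,
  report: {
    dataSource: {
      type: "csv",
      filename: "large-data.csv" // placeholder; the real file is ~330 MB
    }
  }
});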
About the error, Chrome says "Paused before potential out-of-memory crash".
I'll try to put together a JSFiddle later.
Hello, Seekme,
Thank you for your reply and for providing further details about the problem.
We would like to explain that when working with large datasets, an "Out of Memory" situation is bound to happen at some point.
Being a client-side component, Flexmonster relies on resources available to the browser, and this affects the loading time and the maximum size of the data that can be handled on every particular machine.
This means that the client computer's RAM determines how much data can be loaded at once, and its CPU capabilities affect how much time is spent on data analysis.
Some client machines may just not handle loading a large dataset into the browser at once.
We are glad to announce that a more efficient approach to working with large datasets has been introduced in the new major release of Flexmonster, version 2.8. It is provided via the Custom Data Source API. This approach greatly improves performance and gives you full control over how the data is processed. Our team highly recommends considering the Custom Data Source API when a large dataset is being used; a rough configuration sketch is shown after the list of guides below.
Please see the following guides for more information about this new approach:
1) What the Custom Data Source API has to offer.
2) Introduction to the Custom Data Source API.
3) A sample implementation of the Custom Data Source API approach with Node.js.
4) A sample implementation of the Custom Data Source API approach with .NET Core.
5) How to create your own implementation of the Custom Data Source API.
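To give a rough idea of what this looks like on the client side, the report points to your own endpoint instead of a file. The URL and index below are placeholders, not values from your project:

import Flexmonster from "flexmonster";

new Flexmonster({
  container: "#pivot",
  toolbar: true,
  report: {
    dataSource: {
      type: "api",                           // Custom Data Source API
      url: "http://localhost:3400/api/cube", // placeholder endpoint of your own server
      index: "my-dataset"                    // dataset name your server recognizes
    }
  }
});

The server behind that URL answers the component's requests, so the browser no longer has to hold the entire 330 MB or 750 MB file in memory at once.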
We hope this helps.
Please feel free to reach out to us if additional questions arise.
Kind regards,
Vera
Nice! I'm trying to add it to my project but am stuck after the "fields" request, and I can't find any example in the docs. Is it too early?
Thanks.
Hello, Seekme,
Thank you for your response.
Could you please specify which step you are facing difficulties with?
Also, we would like to bring to your attention that our team has prepared sample implementations of the Custom Data Source API approach, the .NET Core sample server being one of them. You are welcome to use our .NET Core implementation as a base for your project.
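In case it is the shape of the "fields" exchange that is unclear, below is a rough sketch of how a server might answer it, loosely based on our sample implementations. The endpoint, field names, and Express setup are illustrative only:

import express from "express";

const app = express();
app.use(express.json());

// Custom Data Source API requests arrive as POST with a "type" property in the body.
app.post("/api/cube", (req, res) => {
  if (req.body.type === "fields") {
    // Describe the structure of the dataset to the component.
    res.json({
      fields: [
        { uniqueName: "Country", type: "string" },
        { uniqueName: "Price", type: "number" },
        { uniqueName: "Date", type: "date" }
      ]
    });
    return;
  }
  // The "members" and "select" request types would be handled here as well.
  res.status(400).json({ message: "Unsupported request type" });
});

app.listen(3400);

After the "fields" response, the component typically follows up with "members" requests for the string hierarchies it displays and "select" requests for the actual data, so those need to be handled next.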
Hope this helps.
Please let us know if you have any further questions.
Kind regards,
Vera