I’m a developer at an educational organization using LiveWhale to sync events from several linked calendars (via .ics). I’m running into two issues:
1) Linked calendar tags not updating
I recently updated our script that generates the .ics so certain events include new tags.
I can confirm those tags are present in the .ics file, but they do not appear on events in our existing linked calendar.
As a test, I created a new linked calendar pointing at the same feed and the events do pull in with the new tags, which suggests a cache or indexing issue tied to the original feed.
Question: Is there a non-destructive way to refresh/reindex the original linked calendar so the tags update without using “Reset all events in the feed” (we want to avoid losing manual edits in LiveWhale)?
If there’s a way to clear only the feed cache or force a metadata refresh for existing items, that would be ideal.
2) HTTP 429 rate limits on dev/local
In production, a single run can fetch ~1,200 events with no errors.
On dev and local, the same process starts returning HTTP 429 (Too Many Requests) after a few hundred requests.
Questions:
Are rate limits applied by client IP, environment, or any trust/allow-list rules? (Prod and the API share the same domain; dev/local use different egress IPs.)
Could we allow-list our dev/Jenkins IPs or get higher limits there?
Do you publish recommended throughput (e.g., requests per second) or a Retry-After policy we should follow? (A sketch of the retry handling we're planning on our side follows below.)
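In the meantime, this is roughly the retry handling we plan to add on our side: a minimal Python sketch, assuming the 429 responses may include a Retry-After header (the base URL and response handling are placeholders, not our actual integration).

```python
import time

import requests

# Placeholder endpoint; substitute the list or detail URL your integration uses.
BASE_URL = "https://calendar.example.edu/live/json/v2/events/"


def fetch_with_backoff(url, max_retries=5, timeout=30):
    """GET a URL, backing off whenever the server answers 429.

    Honors Retry-After when it is present and numeric; otherwise falls
    back to exponential backoff (2, 4, 8, ... seconds).
    """
    for attempt in range(max_retries):
        resp = requests.get(url, timeout=timeout)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        retry_after = resp.headers.get("Retry-After")
        wait = int(retry_after) if retry_after and retry_after.isdigit() else 2 ** (attempt + 1)
        time.sleep(wait)
    raise RuntimeError(f"Still receiving 429 after {max_retries} retries: {url}")


if __name__ == "__main__":
    payload = fetch_with_backoff(BASE_URL)
    # The exact response envelope may differ; adjust the key as needed.
    events = payload.get("data", payload) if isinstance(payload, dict) else payload
    print(f"Fetched {len(events)} items")
```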
Thanks for writing – by default, Linked Calendars sync tags (and event types, and a few other criteria that don't have a dedicated "Customize" checkbox at the per-event level) only when an event is first created. That way, if a user goes in to edit the LiveWhale version of the event, the hourly sync from the .ics file doesn't undo or overwrite their changes.
Given that, we don’t have a way within the UI to say “No, actually sync these tags” onto existing events in a Linked Calendar. Some folks accomplish this using a custom module and the onAfterSync handler; you can see an example of this in our documentation, if that’s something you wanted to tinker with on your dev site: onAfterSync - LiveWhale Support. (Or, we’re always happy to help with that kind of project via the Request Help Form.)
(2)
I’m not sure offhand when the 429 Too Many Requests error gets sent; I’d have to check that with our hosting team. It’s likely, though, that production has additional caching enabled that development sites (which are generally in Development Mode) don’t, so that could be why the prod site is handling your API traffic better than dev.
From the API docs:
While there are no explicit API usage limits currently enforced, we ask that you please maintain a request rate of less than 1 request per second and implement caching for your API integrations, especially for data that changes infrequently.
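For what it's worth, staying under that rate can be as simple as spacing requests out and caching responses that change infrequently. Here's a rough sketch along those lines (the cache location, TTL, and helper names are just illustrative, not an official client):

```python
import hashlib
import json
import time
from pathlib import Path

import requests

MIN_INTERVAL = 1.1             # seconds between requests (stays under 1 req/sec)
CACHE_TTL = 60 * 60            # reuse cached responses for an hour
CACHE_DIR = Path(".lw_cache")  # illustrative location, not an official path

_last_request = 0.0


def throttled_get(url):
    """GET a URL no faster than one request per MIN_INTERVAL seconds."""
    global _last_request
    wait = MIN_INTERVAL - (time.monotonic() - _last_request)
    if wait > 0:
        time.sleep(wait)
    _last_request = time.monotonic()
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()


def cached_get(url):
    """Serve from a small on-disk cache when the data is fresh enough."""
    CACHE_DIR.mkdir(exist_ok=True)
    key = hashlib.sha256(url.encode("utf-8")).hexdigest()
    cache_file = CACHE_DIR / f"{key}.json"
    if cache_file.exists() and time.time() - cache_file.stat().st_mtime < CACHE_TTL:
        return json.loads(cache_file.read_text())
    data = throttled_get(url)
    cache_file.write_text(json.dumps(data))
    return data
```

In practice you'd route every GET in the integration through something like cached_get(), so repeated runs only hit the API for data that has actually expired.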
In general, our JSON API v2 can return (paginated) results for lots of events in just a few requests – I wonder if you may have inherited an integration built a few years ago, before that was possible? Something like /live/json/v2/events/ with a few additional criteria would, I think, get you everything with many fewer requests; that might be something to consider. Then you wouldn’t need an individual live/events/{livewhale_id}@JSON request for each event.
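To illustrate the difference, replacing the per-event loop with a single list request might look something like this; treat the base URL, criteria, and response envelope as placeholders to adapt against the API docs:

```python
import requests

# Illustrative list URL; add whatever criteria (group, date range, etc.) you need.
LIST_URL = "https://calendar.example.edu/live/json/v2/events/"


def fetch_all_events():
    """One list request (or a few paginated ones) instead of ~1,200 detail requests."""
    resp = requests.get(LIST_URL, timeout=60)
    resp.raise_for_status()
    payload = resp.json()
    # The exact response envelope may differ; adjust if results live under another key.
    return payload.get("data", payload) if isinstance(payload, dict) else payload


events = fetch_all_events()
print(f"Retrieved {len(events)} events in a single request")
```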
Thank you very much for the reply. I will definitely check out the onAfterSync solution when I have time. As for /live/json/v2/events/, I tested it a lot. It does contain a lot of information, but there is one issue: date2_ts, which represents the ending time of the event, often comes back as “null” even when there is a value for date2_ts in the live/events/{livewhale_id}@JSON API. That is why I used only live/events/{livewhale_id}@JSON, to ensure all events are fetched with the proper time frame.
Thanks Ryan – it might be easier to spot with an example to compare, but in general, date2_ts is null in the JSON v2 endpoints when the event is attached to a single point in time (either an all-day event, or one where only a start time is provided).
Where you might see the difference is in longer multi-day events. In those cases, the JSON list view will generally just give you “each day’s” version of the event (i.e., for an event that started last week and runs through next week, today’s entry will be a single all-day event spanning the entirety of today). If you’d like the full span of the original event, you can use the repeats_start and repeats_end values (and /hide_repeats/true/ in your request to get just one copy of each multi-day or repeating event). Hope this helps!
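If it helps, here's a small sketch of reading those fields, preferring repeats_start/repeats_end when present and falling back otherwise. I'm assuming date_ts as the start-time counterpart of date2_ts, and the response envelope shown is something to verify against your own calendar:

```python
import requests

# hide_repeats/true/ asks for one copy of each multi-day or repeating event.
URL = "https://calendar.example.edu/live/json/v2/events/hide_repeats/true/"


def event_span(event):
    """Return (start, end) for an event, preferring the full original span.

    repeats_start/repeats_end cover the whole multi-day or repeating range;
    date_ts/date2_ts describe the single occurrence (date2_ts may be null).
    date_ts as the start-time field is an assumption to verify.
    """
    start = event.get("repeats_start") or event.get("date_ts")
    end = event.get("repeats_end") or event.get("date2_ts") or start
    return start, end


resp = requests.get(URL, timeout=60)
resp.raise_for_status()
payload = resp.json()
for event in (payload.get("data", payload) if isinstance(payload, dict) else payload):
    start, end = event_span(event)
    print(event.get("title"), start, end)
```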
I’m looking for the answer to Ryan’s second question about 429s as well.
We have an application that runs once a month that checks events for “event quality” and emails our calendar publishers with a report on the quality of their event content.
We use the event detail API because we want to look at the related_content field for each event, and that field is not included in /live/json/v2/events/.
It would be helpful to know what we need to do to avoid triggering a 429 in the short term so that we can get the report out for this month.
Longer term, it would be great if we could get related_content without requesting each event individually. I’m hoping I’ve missed something in the documentation and that there is a way to do so. I think we would be happy with just getting the info about whether an event has related content or not.
Great question – we’re aligned on the idea that JSON API v2 should ideally contain all possible data fields; I think the missing related_content was an oversight. Let me check with our team to see how difficult that would be to incorporate, and then I’ll circle back to your other question about 429s in the short term.
We’ve found the spot where “related_content” had been left off the list of possible JSON API v2 response fields and are including it in our next release. Thanks for bringing this to our attention! Please let me know if I can speak to any other API questions or issues folks are encountering.