Summary
Data ingestion is a broad topic, and there is no single way to approach this challenge. The right approach depends on your latency requirements, your data source types, how much control you want to have over the ingestion process, and other factors.
First, you learned the steps in the data loading process, which we then explored in more depth throughout the chapter. You also learned about retention policies and how to weigh the implications and benefits of keeping large volumes of data for long periods of time.
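As a quick refresher, a retention policy is set with a KQL control command. The table name and periods below are illustrative, not values from the chapter:

```kusto
// Keep data in MyTable queryable for one year and allow
// recovery of deleted data (table name and periods are examples)
.alter-merge table MyTable policy retention softdelete = 365d recoverability = enabled
```

Longer soft-delete periods increase storage cost but preserve the ability to query and recover historical data.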
Next, you learned about the streaming and batching ingestion strategies, when to use each, and the implications of enabling streaming ingestion in your Data Explorer pool. You learned the conditions that cause batching ingestion to trigger, and how to set these conditions by using a batching policy.
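For reference, the batching conditions are expressed as a JSON document in the table's ingestion batching policy; a batch is sealed as soon as the first condition is met. The specific values below are illustrative:

```kusto
// Seal a batch after 30 seconds, 500 items, or 1 GB of raw data,
// whichever comes first (values are example settings, not defaults)
.alter table MyTable policy ingestionbatching
'{"MaximumBatchingTimeSpan": "00:00:30", "MaximumNumberOfItems": 500, "MaximumRawDataSizeMB": 1024}'
```

Shortening `MaximumBatchingTimeSpan` reduces ingestion latency at the cost of producing more, smaller extents.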
Finally, you learned in detail how to implement data ingestion by using KQL control commands, an Azure Synapse pipeline, and continuous ingestion of files as they are created in an ADLS container.
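As a minimal sketch of the first of those options, a KQL control command can ingest a file directly from storage. The storage account, container, and file names here are placeholders, and this form of direct ingestion is best suited to testing rather than production pipelines:

```kusto
// Ingest a single CSV file from blob storage into MyTable
// (URI and table name are hypothetical placeholders)
.ingest into table MyTable
('https://mystorageaccount.blob.core.windows.net/mycontainer/data.csv')
with (format = 'csv', ignoreFirstRecord = true)
```

The `ignoreFirstRecord` option skips a header row in the source file.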