{"id":1400,"date":"2022-01-20T12:01:53","date_gmt":"2022-01-20T11:01:53","guid":{"rendered":"https:\/\/www.loicmathieu.fr\/wordpress\/?p=1400"},"modified":"2023-04-05T17:02:55","modified_gmt":"2023-04-05T15:02:55","slug":"jai-enfin-pris-le-temps-de-tester-apache-pinot","status":"publish","type":"post","link":"https:\/\/www.loicmathieu.fr\/wordpress\/informatique\/jai-enfin-pris-le-temps-de-tester-apache-pinot\/","title":{"rendered":"I finally took the time to test Apache Pinot"},"content":{"rendered":"<p>I&#8217;ve been wanting to test Apache Pinot for a very long time and I finally took the time to do it!<\/p>\n<h2>First, a quick description of Pinot<\/h2>\n<p>Pinot is a real-time distributed OLAP datastore, purpose-built to provide ultra low-latency analytics, even at extremely high throughput. It can ingest directly from streaming data sources or batch data sources.\nAt the heart of the system is a columnar store, with several smart indexing and pre-aggregation techniques for low latency.\nPinot was built by engineers at LinkedIn and Uber and is designed to scale up and out with no upper bound. 
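The columnar store at the heart of Pinot can be illustrated with a toy sketch (plain Python, purely illustrative; this is not Pinot's actual storage format): storing each column contiguously means an aggregation only reads the columns it needs, not whole rows.

```python
# Toy column-oriented layout: one list per column instead of one dict per row.
rows = [
    {"geo": "FR", "unit": "G_HAB", "y2019": 4.2},
    {"geo": "DE", "unit": "G_HAB", "y2019": 5.1},
    {"geo": "FR", "unit": "THS_T", "y2019": 310.0},
]

# Transpose the row-oriented data into a column store.
columns = {name: [row[name] for row in rows] for name in rows[0]}

# An aggregation such as sum(y2019) only touches the y2019 column.
total = sum(columns["y2019"])
print(total)
```

A real column store adds encoding, compression, and per-column indexes on top of this layout, which is what enables Pinot's pre-aggregation and indexing tricks.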
Performance always remains constant based on the size of your cluster and an expected query per second (QPS) threshold.<\/p>\n<img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/Pinot-architecture.jpg?resize=606%2C475&#038;ssl=1\" alt=\"\" width=\"606\" height=\"475\" class=\"alignnone size-full wp-image-1407\" srcset=\"https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/Pinot-architecture.jpg?w=606&amp;ssl=1 606w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/Pinot-architecture.jpg?resize=300%2C235&amp;ssl=1 300w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/Pinot-architecture.jpg?resize=344%2C270&amp;ssl=1 344w\" sizes=\"auto, (max-width: 606px) 100vw, 606px\" \/>\n<p>A Pinot cluster consists of the following elements:<\/p>\n<ul><li><strong>Pinot Controller<\/strong>: composed of Apache Helix (cluster management) and Apache Zookeeper (coordination), it is the central component of Pinot that takes care of cluster orchestration, replication, and state management of the various cluster components.<\/li>\n\n<li><strong>Pinot Broker<\/strong>: receives client queries and routes them to one or more Pinot servers, then returns a consolidated response.<\/li>\n\n<li><strong>Pinot Server<\/strong>: stores segments (parts of a table) and executes queries. A server can be either <strong>real-time<\/strong> (for streaming data) or <strong>offline<\/strong> (for batched, immutable data).<\/li>\n\n<li><strong>Pinot Minion<\/strong>: an optional component that runs background tasks within the cluster, for example data purging.<\/li>\n<\/ul>\n<h2>First launch<\/h2>\n<p>So let&#8217;s start at the beginning: launching Pinot locally! 
Since Apache Pinot is a distributed system with several components (Zookeeper, Pinot Controller, Pinot Broker, Pinot Server), I decided to use an all-in-one Docker image to test it locally, as that seems to be the simplest way.<\/p>\n<p>Following the <a href=\"https:\/\/docs.pinot.apache.org\/basics\/getting-started\/running-pinot-in-docker\" rel=\"noopener\" target=\"_blank\">Getting Started<\/a> guide, I ran Pinot with the following Docker command, which starts it with a pre-imported baseball statistics dataset. After instantiating the components and importing the data, the container executes a set of queries and displays their results in the logs.<\/p>\n<pre>\ndocker run \\\n    -p 9000:9000 \\\n    apachepinot\/pinot:0.9.3 QuickStart \\\n    -type batch\n<\/pre>\n<p>After starting Pinot and importing the dataset, I can use the Pinot console (available on port 9000 by default) to access the cluster.<\/p>\n<p>This allows you to view the status of the cluster via the <strong>Cluster Management<\/strong> tab.<\/p>\n<img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-cluster-view.png?resize=640%2C339&#038;ssl=1\" alt=\"\" width=\"640\" height=\"339\" class=\"alignnone size-large wp-image-1408\" srcset=\"https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-cluster-view.png?resize=1024%2C542&amp;ssl=1 1024w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-cluster-view.png?resize=300%2C159&amp;ssl=1 300w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-cluster-view.png?resize=768%2C406&amp;ssl=1 768w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-cluster-view.png?resize=1536%2C813&amp;ssl=1 1536w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-cluster-view.png?resize=510%2C270&amp;ssl=1 510w, 
https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-cluster-view.png?w=1852&amp;ssl=1 1852w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-cluster-view.png?w=1280&amp;ssl=1 1280w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/>\n<p>And launch queries via the <strong>Query Console<\/strong> tab. I run a <code>count(*)<\/code> on the newly created <strong>baseballStats<\/strong> table; the query executes almost immediately (a few milliseconds), but then again, there are only 97,889 rows, so that is to be expected.<\/p>\n<img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-first-query.png?resize=640%2C274&#038;ssl=1\" alt=\"\" width=\"640\" height=\"274\" class=\"alignnone size-large wp-image-1409\" srcset=\"https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-first-query.png?resize=1024%2C438&amp;ssl=1 1024w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-first-query.png?resize=300%2C128&amp;ssl=1 300w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-first-query.png?resize=768%2C329&amp;ssl=1 768w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-first-query.png?resize=1536%2C657&amp;ssl=1 1536w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-first-query.png?resize=604%2C258&amp;ssl=1 604w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-first-query.png?w=1828&amp;ssl=1 1828w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-first-query.png?w=1280&amp;ssl=1 1280w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/>\n<h2>Analysis of European carbon emissions<\/h2>\n<p>Alright, that&#8217;s nice, but to get past Hello World and really test Pinot, I&#8217;m going to need a bigger dataset to play with.<\/p>\n<p>The European Commission provides a 
large set of Open Data; now is the time to take advantage of it.<\/p>\n<p>I chose a dataset on greenhouse gas emissions in the Eurozone (you can find it at <a href=\"https:\/\/ec.europa.eu\/eurostat\/web\/climate-change\/data\/database\" rel=\"noopener\" target=\"_blank\">this page<\/a>): <a href=\"https:\/\/ec.europa.eu\/eurostat\/estat-navtree-portlet-prod\/BulkDownloadListing?file=data\/env_ac_ainah_r2.tsv.gz\" rel=\"noopener\" target=\"_blank\">Air emissions accounts by NACE Rev. 2 activity<\/a>, which contains some 4 million data points.<\/p>\n<p>Here is the description of the dataset: <em>This data set reports the emissions of greenhouse gases and air pollutants broken down by 64 industries (classified by NACE Rev. 2) plus households. Concepts and principles are the same as in national accounts. Complete data starts from reference year 2008<\/em>.<\/p>\n<p>We now have to load the data: columns are separated by tabs, missing values are denoted by <code>:<\/code>, and the first column combines four labels (<strong>airpol,nace_r2,unit,geo\\time<\/strong>) that would best be split into four separate columns.<\/p>\n<p>To send this data to Pinot, I relied on the guide <a href=\"https:\/\/docs.pinot.apache.org\/basics\/getting-started\/pushing-your-data-to-pinot\" rel=\"noopener\" target=\"_blank\">Pushing your data to Pinot<\/a>.<\/p>\n<p>To start, you have to define a schema for the data; here is the one I used:<\/p>\n<pre>\n{\n  \"schemaName\": \"greenhouseGazEmission\",\n  \"dimensionFieldSpecs\": [\n    {\n      \"name\": \"airpol\",\n      \"dataType\": \"STRING\"\n    },\n    {\n      \"name\": \"nace_r2\",\n      \"dataType\": \"STRING\"\n    },\n    {\n      \"name\": \"unit\",\n      \"dataType\": \"STRING\"\n    },\n    {\n      \"name\": \"geo\",\n      \"dataType\": \"STRING\"\n    }\n  ],\n  \"metricFieldSpecs\": [\n    {\n      \"name\": \"2020\",\n      \"dataType\": \"FLOAT\"\n    },\n    {\n      \"name\": \"2019\",\n      
\"dataType\": \"FLOAT\"\n    },\n    [...] \/\/ repeat the same for all fields down to 1995\n  ]\n}\n<\/pre>\n<p>In this schema, we define two types of fields: dimension fields, which are strings on which we will be able to filter or group the data, and metric fields, here all floats (one field per year of available data), on which we will be able to run calculations (aggregations).<\/p>\n<p>You must then define the table that will store the data; here is its definition, for the moment very straightforward:<\/p>\n<pre>\n{\n  \"tableName\": \"greenhouseGazEmission\",\n  \"segmentsConfig\" : {\n    \"replication\" : \"1\",\n    \"schemaName\" : \"greenhouseGazEmission\"\n  },\n  \"tableIndexConfig\" : {\n    \"invertedIndexColumns\" : [],\n    \"loadMode\"  : \"MMAP\"\n  },\n  \"tenants\" : {\n    \"broker\":\"DefaultTenant\",\n    \"server\":\"DefaultTenant\"\n  },\n  \"tableType\":\"OFFLINE\",\n  \"metadata\": {}\n}\n<\/pre>\n<p>To create this table, I restarted a Pinot cluster from the <a href=\"https:\/\/docs.pinot.apache.org\/basics\/getting-started\/running-pinot-in-docker#docker-compose\" rel=\"noopener\" target=\"_blank\">Docker Compose proposed in the documentation<\/a>, then used the following Docker command, which launches a table-creation command with the schema and table definitions created above:<\/p>\n<pre>\ndocker run --rm -ti \\\n    --network=pinot_default \\\n    -v ~\/dev\/pinot\/data:\/tmp\/pinot-quick-start \\\n    --name pinot-batch-table-creation \\\n    apachepinot\/pinot:0.9.3 AddTable \\\n    -schemaFile \/tmp\/pinot-quick-start\/greenhousGazEmission-schema.json \\\n    -tableConfigFile \/tmp\/pinot-quick-start\/greenhousGazEmission-table.json \\\n    -controllerHost manual-pinot-controller \\\n    -controllerPort 9000 -exec\n<\/pre>\n<p>We can then check via the Pinot console (<a href=\"http:\/\/localhost:9000\">http:\/\/localhost:9000<\/a>) that the table has been created.<\/p>\n<p>Now, time 
to insert the data!<\/p>\n<p>The file is a TSV that contains <code>:<\/code> where data is missing. To prepare it for ingestion into Pinot, we run a <code>sed<\/code> command that transforms it into CSV (replacing the tabs with commas) and deletes certain unwanted characters; note that this <code>sed<\/code> command also modifies the header line of field names, which must then be restored to the names expected by the schema.<\/p>\n<pre>\nsed 's\/\\t\/,\/g;s\/:\/\/g;s\/[pseb]\/\/g;s\/[[:blank:]]\/\/g' env_ac_ainah_r2.tsv &gt; data\/env_ac_ainah_r2.csv\n<\/pre>\n<p>The ingestion into Pinot is done via an ingestion job defined in YAML format.<\/p>\n<pre>\nexecutionFrameworkSpec:\n  name: 'standalone'\n  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'\n  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'\n  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'\njobType: SegmentCreationAndTarPush\ninputDirURI: '\/tmp\/pinot-quick-start'\nincludeFileNamePattern: 'glob:**\/*.csv'\noutputDirURI: '\/tmp\/pinot-quick-start\/segments\/'\noverwriteOutput: true\npinotFSSpecs:\n  - scheme: file\n    className: org.apache.pinot.spi.filesystem.LocalPinotFS\nrecordReaderSpec:\n  dataFormat: 'csv'\n  className: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReader'\n  configClassName: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig'\ntableSpec:\n  tableName: 'greenhouseGazEmission'\n  schemaURI: 'http:\/\/pinot-controller:9000\/tables\/greenhouseGazEmission\/schema'\n  tableConfigURI: 'http:\/\/pinot-controller:9000\/tables\/greenhouseGazEmission'\npinotClusterSpecs:\n  - controllerURI: 'http:\/\/pinot-controller:9000'\n<\/pre>\n<p>This job defines among other things:<\/p>\n<ul><li><code>jobType: SegmentCreationAndTarPush<\/code>: the job will create a table segment. 
A segment is a partition of the table data. If your data set is large, you will have to split the CSV file so you can launch several jobs and obtain several segments.<\/li>\n\n<li><code>inputDirURI<\/code> and <code>includeFileNamePattern<\/code>, which define where to look for the CSV file(s) to load data from.<\/li>\n\n<li><code>recordReaderSpec<\/code>, which defines the CSV data format.<\/li>\n\n<li><code>tableSpec<\/code>, which defines the target table specification, the one we defined earlier.<\/li>\n<\/ul>\n<p>To launch the job, you can use the following Docker command:<\/p>\n<pre>\ndocker run --rm -ti \\\n    --network=pinot_default \\\n    -v ~\/dev\/pinot\/data:\/tmp\/pinot-quick-start \\\n    --name pinot-data-ingestion-job \\\n    apachepinot\/pinot:0.9.3 LaunchDataIngestionJob \\\n    -jobSpecFile \/tmp\/pinot-quick-start\/job-ingest.yml\n<\/pre>\n<p>After ingestion, we now have 266,456 rows in our table, which we can then query from the Pinot console.<\/p>\n<p>For example via the following query: <code>select geo, sum(2019), sum(2020), sum(2021) from greenhouseGazEmission group by geo<\/code>.<\/p>\n<p>Pinot uses Apache Calcite for its query layer, so we can use ANSI SQL, which greatly simplifies querying. 
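To make explicit what this group-by query computes, here is the equivalent aggregation written in plain Python over a tiny made-up sample shaped like the ingested CSV (illustrative values, not the real Eurostat data):

```python
import csv
import io
from collections import defaultdict

# A tiny hand-written sample with the same header shape as the ingested CSV
# (the values are made up, not real Eurostat data).
sample = """airpol,nace_r2,unit,geo,2020,2019
CO2,A01,THS_T,FR,100.0,110.0
CO2,A01,THS_T,DE,200.0,210.0
CH4,A02,THS_T,FR,50.0,55.0
"""

# Equivalent of: select geo, sum(2019), sum(2020) from greenhouseGazEmission group by geo
totals = defaultdict(lambda: {"2019": 0.0, "2020": 0.0})
for row in csv.DictReader(io.StringIO(sample)):
    totals[row["geo"]]["2019"] += float(row["2019"])
    totals[row["geo"]]["2020"] += float(row["2020"])

print(dict(totals))
```

Pinot of course does this scan-and-aggregate work across segments and servers, with the broker consolidating the partial results.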
The above query returns the following results:<\/p>\n<img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-carbonn-footprint-query.png?resize=640%2C447&#038;ssl=1\" alt=\"\" width=\"640\" height=\"447\" class=\"alignnone size-large wp-image-1410\" srcset=\"https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-carbonn-footprint-query.png?resize=1024%2C715&amp;ssl=1 1024w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-carbonn-footprint-query.png?resize=300%2C209&amp;ssl=1 300w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-carbonn-footprint-query.png?resize=768%2C536&amp;ssl=1 768w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-carbonn-footprint-query.png?resize=387%2C270&amp;ssl=1 387w, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/pinot-carbonn-footprint-query.png?w=1249&amp;ssl=1 1249w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/>\n<p>Since the dataset is quite small, queries run in milliseconds even though no index has been created.<\/p>\n<p>Now let&#8217;s try the following query: <code>select sum(2019), sum(2020), sum(2021) from greenhouseGazEmission where unit = 'G_HAB'<\/code>. It runs in 24ms, and we can see the following information in the result output:<\/p>\n<ul><li>numDocsScanned: 66614<\/li>\n\n<li>totalDocs: 266456<\/li>\n\n<li>numEntriesScannedInFilter: 266456<\/li>\n<\/ul>\n<p>Pinot has therefore scanned 66,614 documents out of 266,456: the documents that correspond to the G_HAB unit. 
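These statistics are fields of the JSON envelope the broker returns alongside the result rows. Here is a small sketch of how to read them; the response below is hand-written for illustration, with only the field names and the statistics values taken from the run above:

```python
import json

# Hand-written sample of a broker response envelope; the statistics values
# are the ones reported for the G_HAB query above.
response = json.loads("""
{
  "numDocsScanned": 66614,
  "totalDocs": 266456,
  "numEntriesScannedInFilter": 266456,
  "timeUsedMs": 24
}
""")

# Fraction of documents the query actually scanned.
selectivity = response["numDocsScanned"] / response["totalDocs"]
print(f"scanned {selectivity:.0%} of documents")  # prints "scanned 25% of documents"

# numEntriesScannedInFilter == totalDocs means the filter fell back to a full scan.
full_scan = response["numEntriesScannedInFilter"] == response["totalDocs"]
print("full scan:", full_scan)
```

Watching numEntriesScannedInFilter is a cheap way to spot filters that would benefit from an index.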
During the filter phase, it scanned 266,456 entries: a full scan of the table.<\/p>\n<h2>Optimizing queries with indexes<\/h2>\n<p>Pinot allows adding indexes to optimize queries; in the query above, the unit column was used to filter the data.<\/p>\n<p>So I&#8217;m going to modify the table structure to add an inverted index on the unit column.<\/p>\n<p>To do this, we can modify the table definition to add this index:<\/p>\n<pre>\n{\n  \"tableName\": \"greenhouseGazEmission\",\n  \"tableIndexConfig\" : {\n    \"invertedIndexColumns\" : [\"unit\"],\n    \"loadMode\"  : \"MMAP\"\n  },\n  [...]\n}\n<\/pre>\n<p>For simplicity, I used the Pinot GUI to add the index (by editing the table) and then reloaded the segments. Reloading segments after a table change is necessary for the index to be created and updated.<\/p>\n<p>After relaunching the query, we see that <strong>numEntriesScannedInFilter<\/strong> drops to 0: Pinot is now using the newly created index.<\/p>\n<p>One of the strengths of Pinot is that it supports many different types of indexes, which makes it possible to cover different use cases and to optimize for query latency or for storage, since each index consumes disk space.<\/p>\n<p>To go further on Pinot&#8217;s indexing capabilities, you can read my article: <a href=\"https:\/\/www.loicmathieu.fr\/wordpress\/informatique\/apache-pinot-et-de-ses-differents-types-dindexes\/\" title=\"Apache Pinot and its various types of indexes\">Apache Pinot and its various types of indexes<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>I&#8217;ve been wanting to test Apache Pinot for a very long time and I finally took the time to do it! First, a quick description of Pinot Pinot is a real-time distributed OLAP datastore, purpose-built to provide ultra low-latency analytics, even at extremely high throughput. It can ingest directly from streaming data sources or batch data sources. 
At the heart of the system is a columnar store, with several smart indexing and pre-aggregation techniques for low latency. Pinot was built&#8230;<p class=\"read-more\"><a class=\"btn btn-default\" href=\"https:\/\/www.loicmathieu.fr\/wordpress\/informatique\/jai-enfin-pris-le-temps-de-tester-apache-pinot\/\"> Read More<span class=\"screen-reader-text\">  Read More<\/span><\/a><\/p><\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"activitypub_content_warning":"","activitypub_content_visibility":"","activitypub_max_image_attachments":4,"activitypub_interaction_policy_quote":"anyone","activitypub_status":"","footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[9],"tags":[203,204,202],"class_list":["post-1400","post","type-post","status-publish","format-standard","hentry","category-informatique","tag-database","tag-olap","tag-pinot"],"aioseo_notices":[],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"jetpack-related-posts":[{"id":1497,"url":"https:\/\/www.loicmathieu.fr\/wordpress\/informatique\/apache-pinot-et-de-ses-differents-types-dindexes\/","url_meta":{"origin":1400,"position":0},"title":"Apache Pinot and its various types of indexes","author":"admin","date":"Thursday September 15th, 2022","format":false,"excerpt":"Some time ago, I finally took the time to test Apache Pinot, you can find the story of my 
first experiments here. Apache Pinot is a distributed real-time OnLine Analytical Processing (OLAP) datastore specifically designed to provide ultra-low latency analytics, even at extremely high throughput. If you don't know it,\u2026","rel":"","context":"In &quot;informatique&quot;","block_context":{"text":"informatique","link":"https:\/\/www.loicmathieu.fr\/wordpress\/category\/informatique\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/star-tree.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/star-tree.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/star-tree.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/loicmathieu.fr\/wordpress\/wp-content\/uploads\/star-tree.png?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":1508,"url":"https:\/\/www.loicmathieu.fr\/wordpress\/informatique\/4-ans-chez-zenika\/","url_meta":{"origin":1400,"position":1},"title":"(Fran\u00e7ais) 4 ans chez Zenika","author":"admin","date":"Tuesday September  6th, 2022","format":false,"excerpt":"Sorry, this entry is only available in Fran\u00e7ais.","rel":"","context":"In &quot;informatique&quot;","block_context":{"text":"informatique","link":"https:\/\/www.loicmathieu.fr\/wordpress\/category\/informatique\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1674,"url":"https:\/\/www.loicmathieu.fr\/wordpress\/informatique\/devoxx-fr-2023-foundation-db-le-secret-le-mieux-garde-des-nouvelles-architectures-distribuees-par-pierre-zemb-et-steven-le-roux\/","url_meta":{"origin":1400,"position":2},"title":"(Fran\u00e7ais) Devoxx FR 2023 &#8211; FoundationDB : le secret le mieux gard\u00e9 des nouvelles architectures distribu\u00e9es ! 
par Pierre Zemb et Steven Le Roux","author":"admin","date":"Monday April 17th, 2023","format":false,"excerpt":"Sorry, this entry is only available in Fran\u00e7ais.","rel":"","context":"In &quot;informatique&quot;","block_context":{"text":"informatique","link":"https:\/\/www.loicmathieu.fr\/wordpress\/category\/informatique\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":19,"url":"https:\/\/www.loicmathieu.fr\/wordpress\/informatique\/ruby-on-rails\/","url_meta":{"origin":1400,"position":3},"title":"Ruby on Rails","author":"admin","date":"Thursday February 15th, 2007","format":false,"excerpt":"Bonjour, d'habitude au ton bucolique des vacances ou revendicatif des coups de gueules, aujourd'hui le ton de ce post va \u00eatre technophile. En effet, je bosse dans l'informatique qui est donc un de mes centre d'int\u00e9r\u00eat, et je vous livre ici mon premier message sur les nouvelles technologies. J'ai tester\u2026","rel":"","context":"In &quot;informatique&quot;","block_context":{"text":"informatique","link":"https:\/\/www.loicmathieu.fr\/wordpress\/category\/informatique\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":419,"url":"https:\/\/www.loicmathieu.fr\/wordpress\/informatique\/chti-jug-nosql\/","url_meta":{"origin":1400,"position":4},"title":"Ch&#8217;ti JUG : NoSQL","author":"admin","date":"Monday December 20th, 2010","format":false,"excerpt":"Le 2 d\u00e9cembre s'est tenu dans les locaux de l'IUT A de Lille une session du Ch'ti JUG sur les technologie NoSQL anim\u00e9 par Olivier Mallassi. 
L'intervenant a commenc\u00e9 la conf\u00e9rence par un bref historique de la mani\u00e8re dont les donn\u00e9es on \u00e9t\u00e9 stock\u00e9es dans le monde de l'informatique: Au\u2026","rel":"","context":"In &quot;informatique&quot;","block_context":{"text":"informatique","link":"https:\/\/www.loicmathieu.fr\/wordpress\/category\/informatique\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":966,"url":"https:\/\/www.loicmathieu.fr\/wordpress\/informatique\/1-an-chez-zenika\/","url_meta":{"origin":1400,"position":5},"title":"(Fran\u00e7ais) 1 an chez Zenika","author":"admin","date":"Tuesday September  3rd, 2019","format":false,"excerpt":"Sorry, this entry is only available in Fran\u00e7ais.","rel":"","context":"In &quot;informatique&quot;","block_context":{"text":"informatique","link":"https:\/\/www.loicmathieu.fr\/wordpress\/category\/informatique\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/www.loicmathieu.fr\/wordpress\/wp-json\/wp\/v2\/posts\/1400","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.loicmathieu.fr\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.loicmathieu.fr\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.loicmathieu.fr\/wordpress\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.loicmathieu.fr\/wordpress\/wp-json\/wp\/v2\/comments?post=1400"}],"version-history":[{"count":2,"href":"https:\/\/www.loicmathieu.fr\/wordpress\/wp-json\/wp\/v2\/posts\/1400\/revisions"}],"predecessor-version":[{"id":1666,"href":"https:\/\/www.loicmathieu.fr\/wordpress\/wp-json\/wp\/v2\/posts\/1400\/revisions\/1666"}],"wp:attachment":[{"href":"https:\/\/www.loicmathieu.fr\/wordpress\/wp-json\/wp\/v2\/media?parent=1400"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.loicmathieu.fr\/wordpress\/wp-json\/wp\/v2\/categories?post=1400"},{"t
axonomy":"post_tag","embeddable":true,"href":"https:\/\/www.loicmathieu.fr\/wordpress\/wp-json\/wp\/v2\/tags?post=1400"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}