[SPIKE] Twitter and Facebook connectors create duplicate postings


For this spike: find out the best way to queue the Twitter and Facebook messages. Also queue the item when the article is not published yet and post the article once it is published; this is reportedly possible in the current queue.

Also find out the best way to store / log the postings to Twitter and Facebook, so we can prevent duplicate postings.


  • Enable the IntegratedSocialBundle in the AppKernel and implement the config.yml

  • Add an ID field to 'channel_connector_config' and refactor the connector controller to make use of the ID

  • In 'Integrated\Bundle\SocialBundle\Connector\Facebook\Exporter::export', check whether the item has been posted before.

    • Check the response code from '$this->facebook->post()' to verify that the request was successful

    • In 'vendor/integrated/integrated/src/Common/Channel/Exporter/Exporter.php' change the function getExporter(OptionsInterface $options) to getExporter(ConfigInterface $config) (update all implementations, including the interface). The connector ID is needed to check whether an item has been posted before.

    • Save the state per connector config to the metadata if the item has not been posted before (to support multiple connector configs).

    • The connector should return the remote post ID (or other data) and should not save the data itself. The data, an array, should be saved by the QueueExporter class.

    • Save the Facebook/Twitter post ID so the remote post can be removed later (if the article is no longer published)

  • To support delayed publication times, use the 'time_execute' field in the queue and check in the exporter whether a document is published.

  • Also do the same for 'Integrated\Bundle\SocialBundle\Connector\Twitter\Exporter'

NOTE: the exporter should not save any data to the document.
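The task list above amounts to roughly this flow. A minimal PHP sketch, a non-authoritative illustration only: QueueExporter and the connector Exporter are named in the tasks, but these simplified signatures, the array-based metadata store, and the fake remote ID are all assumptions for demonstration.

```php
<?php
// Sketch of the intended division of work: the connector checks the
// per-config metadata, posts if needed, and RETURNS the remote post
// data; QueueExporter is the only place that persists anything.

class ConnectorExporter
{
    /** @return array|null remote post data, e.g. ['postId' => '...'], or null if already posted */
    public function export(array $document, string $configId, array $metadata): ?array
    {
        // Skip if this connector config has already posted the document.
        if (isset($metadata[$configId]['postId'])) {
            return null; // prevent a duplicate posting
        }

        // The real exporter would call $this->facebook->post() here and
        // check the response code; we fake a remote post ID instead.
        return ['postId' => 'remote-' . $configId];
    }
}

class QueueExporter
{
    /** @var array<string, array> metadata keyed per connector config ID */
    private array $metadata = [];

    public function run(ConnectorExporter $exporter, array $document, string $configId): void
    {
        $data = $exporter->export($document, $configId, $this->metadata);

        // The exporter must not save anything to the document;
        // persisting the returned data happens here.
        if ($data !== null) {
            $this->metadata[$configId] = $data;
        }
    }

    public function getMetadata(): array
    {
        return $this->metadata;
    }
}
```

Running the same document through the same connector config twice would then post only once, while a second connector config (a different ID) still gets its own post.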

Deployment actions


Technical tasks



Marijn Otte
June 21, 2018, 7:43 AM

I think the value of the state in the queue is limited. For the "add" state: by the time the queue is processed, the item can be unpublished, future published, past published or deleted, so you have to check the state again anyway. For the "delete" state: by the time the queue is processed, the item can be published again. So in both situations the state might have switched. An added advantage is that you can then ignore a switched state instead of doing a duplicate export. But this will be addressed in another issue.

Jeroen van Leeuwen
June 21, 2018, 6:43 AM

thanks for the answers!
I think we misunderstood each other about "can we queue the jobs for unpublished articles in the current setup": I meant that articles with a future publication date will be published by the connector regardless of that date.

Reportedly it is possible to store these items in the queue with an execution time. But I am not sure whether this is supported in the current setup (needs to be checked).

And also some new questions about the changes to the spike:
Where and how are we going to save the metadata now?
And I should indeed not remove the state from the queue.
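The execution-time idea discussed above could look roughly like this. A minimal sketch only: the 'time_execute' field name comes from the discussion, but this Queue class and its put()/due() methods are purely illustrative, not the actual Integrated queue API.

```php
<?php
// Sketch: queue items carry a 'time_execute' timestamp, and the runner
// only picks up items whose execution time has passed. Field name from
// the discussion; everything else here is a hypothetical stand-in.

class Queue
{
    /** @var array<int, array{payload: array, time_execute: int}> */
    private array $items = [];

    public function put(array $payload, int $timeExecute = 0): void
    {
        $this->items[] = ['payload' => $payload, 'time_execute' => $timeExecute];
    }

    /** Return only the items that are due at the given time. */
    public function due(int $now): array
    {
        return array_values(array_filter(
            $this->items,
            fn (array $item) => $item['time_execute'] <= $now
        ));
    }
}
```

An article with a future publication date would be queued with that date as its execution time. As noted elsewhere in this thread, the exporter still has to re-check the publication state when the item runs, since the state can change while the item waits.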

Ger Jan van den Bosch
June 20, 2018, 12:32 PM

  • Changed

  • I do not understand what you mean by 'time_execute' (should this be another issue?)

  • If the article is not published anymore, a new delete queue item is added.

    • Support the delete state in: Integrated\Bundle\SocialBundle\Connector\Facebook\Exporter

Marijn Otte
June 19, 2018, 2:41 PM

Another question / remark:
The state seems to be added to the queue message, which is not OK, because the state can change after adding it to the queue. So I think that needs to be changed as well.

Marijn Otte
June 19, 2018, 2:15 PM

Few questions / remarks:

  • The connector will have to store the ID of the remote Twitter post. In the future we also want to have the URL of the remote post, and maybe some other data that differs per connector. My preference would be that the connector returns the data and Integrated is responsible for saving it and providing it back to the connector. I think individual connectors should not save to the metadata themselves, to keep it easy to create one and to allow Integrated to use the information as well (in the future the URL will be important). I'm not sure if the provided solution works this way or differently.

  • Running the connectors is already queued and runs in the background. The queue also has a 'time_execute' field to queue a run in the future. (I'm not 100% sure if it's implemented in the queue runner as well.)




