Add the ability to record videos of functional tests and to run all functional tests on tags
>>> [!note] Migrated issue
<!-- Drupal.org comment -->
<!-- Migrated from issue #3577469. -->
Reported by: [marcus_johansson](https://www.drupal.org/user/385947)
Related to !1292
>>>
<p>[Tracker]<br>
<strong>Update Summary: </strong>[One-line status update for stakeholders]<br>
<strong>Short Description: </strong>[One-line issue summary for stakeholders]<br>
<strong>Check-in Date: </strong>MM/DD/YYYY<br>
<em>Metadata is used by the <a href="https://www.drupalstarforge.ai/" title="AI Tracker">AI Tracker.</a> Docs and additional fields <a href="https://www.drupalstarforge.ai/ai-dashboard/docs" title="AI Issue Tracker Documentation">here</a>.</em><br>
[/Tracker]</p>
<h3 id="summary-problem-motivation">Problem/Motivation</h3>
<p>Functional testing is great when it works; however, there are four main reasons I have seen historically why it doesn't:</p>
<ul>
<li>Maintainability - when the frontend changes its markup, the functional tests need to change with it.</li>
<li>Slow to write - it takes time to write a test, and they are also harder to set up locally compared to other test types.</li>
<li>Hard to validate failures - without visual cues it can be hard to tell from a failed step why it failed.</li>
<li>Slow to execute - they drive a real browser, so they simply take time to run.</li>
<li>As an aside, with AI you also need a way to mock a provider response, but we have solved this already in the testing module.</li>
</ul>
<p>I think code agents take the edge off the first two problems, and for the other two the idea is to:</p>
<ul>
<li>Record a video of each test - this makes it possible to see when and how something fails.</li>
<li>Group the tests with tags - one for the issue number, so that the issue's pipeline runs only its specific tests, and one for running everything when a release is tagged (see the sketch after this list).</li>
</ul>
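<p>As a hedged sketch of the tagging idea: PHPUnit <code>@group</code> annotations on the test class could carry both tags. The group names <code>issue_3577469</code> and <code>release_only</code> below are placeholders, not an agreed convention:</p>

```php
<?php

namespace Drupal\Tests\ai\FunctionalJavascript;

use Drupal\FunctionalJavascriptTests\WebDriverTestBase;

/**
 * Example functional test carrying both an issue tag and a release tag.
 *
 * @group ai
 * @group issue_3577469
 * @group release_only
 */
class AiExampleTest extends WebDriverTestBase {

  /**
   * {@inheritdoc}
   */
  protected $defaultTheme = 'stark';

  /**
   * {@inheritdoc}
   */
  protected static $modules = ['ai'];

  /**
   * A trivial smoke test; its video would be named after this function.
   */
  public function testFrontPageLoads(): void {
    $this->drupalGet('<front>');
    // Status codes are not available through WebDriver, so assert on the
    // rendered page instead.
    $this->assertSession()->elementExists('css', 'body');
  }

}
```

<p>Running <code>phpunit --group issue_3577469</code> would then execute only this issue's tests, while <code>--group release_only</code> collects the full suite for tagged releases.</p>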
<p>When we tag a release we are not in a hurry, so it is fine if the tests need an hour or two to run.</p>
<p>I have had success with this approach using Behat in previous projects, and with AI it gets even easier.</p>
<h3 id="summary-proposed-resolution">Proposed resolution</h3>
<ul>
<li>Check that the hardware of the runners is sufficient. They usually run on server CPUs that lack an onboard video encoder, and we might also need to experiment with swap memory if the machines have less than 2 GB of RAM.</li>
<li>If possible, let the tests run with video recording. Each recorded video should be named after the test function (see the first sketch after this list).</li>
<li>Change the GitLab CI configuration so that, in general, only tests tagged with the current issue are run, and the full suite runs only when tagging a release (see the second sketch after this list).</li>
</ul>
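<p>For the video recording, one possible approach (an assumption, not a settled implementation) is to wrap each test in an ffmpeg capture of the Xvfb display the browser renders to, naming the file after the test function. The trait below, including the display number <code>:99</code>, is hypothetical:</p>

```php
<?php

namespace Drupal\Tests\ai\FunctionalJavascript;

/**
 * Hypothetical trait that records the browser's X display with ffmpeg.
 *
 * Assumes ffmpeg is installed on the runner and the browser renders to an
 * Xvfb display on :99.
 */
trait VideoRecordingTrait {

  /**
   * The PID of the background ffmpeg process, or NULL when not recording.
   */
  protected ?int $ffmpegPid = NULL;

  /**
   * Starts capturing the display into a file named after the test function.
   */
  protected function startRecording(string $test_name): void {
    $dir = getenv('BROWSERTEST_OUTPUT_DIRECTORY') ?: sys_get_temp_dir();
    $file = $dir . '/' . $test_name . '.mp4';
    // -f x11grab captures the virtual display; 15 fps keeps files small,
    // which matters on server CPUs without a hardware encoder.
    $command = sprintf(
      'ffmpeg -y -f x11grab -video_size 1280x1024 -r 15 -i :99 %s > /dev/null 2>&1 & echo $!',
      escapeshellarg($file)
    );
    $this->ffmpegPid = (int) trim((string) shell_exec($command));
  }

  /**
   * Stops the capture, letting ffmpeg finalize the file cleanly.
   */
  protected function stopRecording(): void {
    if ($this->ffmpegPid) {
      // SIGINT makes ffmpeg write the container trailer before exiting.
      posix_kill($this->ffmpegPid, SIGINT);
      $this->ffmpegPid = NULL;
    }
  }

}
```

<p>For the GitLab change, a minimal sketch of the rules split, assuming job names and a group naming convention that do not exist yet in the project's <code>.gitlab-ci.yml</code>:</p>

```yaml
# Hypothetical .gitlab-ci.yml fragment. The predefined GitLab variables
# (CI_PIPELINE_SOURCE, CI_MERGE_REQUEST_SOURCE_BRANCH_NAME, CI_COMMIT_TAG)
# are real; the job names and group scheme are assumptions.
functional-js-issue:
  stage: test
  script:
    # Drupal.org issue fork branches start with the issue number, so the
    # group name can be derived from the branch (an assumed convention).
    - ISSUE_ID="${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME%%-*}"
    - vendor/bin/phpunit --group "issue_${ISSUE_ID}"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

functional-js-full:
  stage: test
  script:
    # A release tag runs the whole suite; an hour or two is acceptable here.
    - vendor/bin/phpunit --group release_only
  rules:
    - if: '$CI_COMMIT_TAG'
```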
<h3 id="summary-remaining-tasks">Remaining tasks</h3>
<h3>Optional: Other details as applicable (e.g., User interface changes, API changes, Data model changes)</h3>
<h3 id="summary-ai-usage">AI usage (if applicable)</h3>
<p>[ ] AI Assisted Issue<br>
This issue was generated with AI assistance, but was reviewed and refined by the creator.</p>
<p>[ ] AI Assisted Code<br>
This code was mainly written by a human, with AI autocompletion or some AI-generated parts, all under full human supervision.</p>
<p>[ ] AI Generated Code<br>
This code was mainly generated by an AI with human guidance, and reviewed, tested, and refined by a human.</p>
<p>[ ] Vibe Coded<br>
This code was generated by an AI and has only been functionally tested.</p>