Dear partners,
I'm sending the links for three impact surveys that we are supposed to send out, and have filled in, during these final months until the end of the project.
They relate to three different target groups:
1. SMEs
https://foss4smes.limequery.com/1?lang=en
2. VET Centres, Trainers and Coaches
https://foss4smes.limequery.com/2?lang=en
3. Project partners (i.e. ourselves)
https://foss4smes.limequery.com/3?lang=en
The fourth target group to be surveyed will be "Other stakeholders" (e.g. policy makers in digital education). They will be part of a "formal consultation based on Intellectual Output 3", which is still under development.
However, this fourth group will most probably be approached on the occasion of the final conference in Brussels.
In addition, please find attached a template for the collection of SME case studies showcasing the participants' experience and improved performance. We are supposed to collect 5 case studies per partner, 30 in total.
These activities relate to the "Impact" strategy described on pages 62-63 of the proposal.
I have started updating the Quality Plan accordingly with all the necessary information (you will find it in Keybase), and it will be finalised as soon as we are also done with the 4th target group and the self-diagnostic tool (which is supposed to capture the "before" situation of participants).
Please have a look at the surveys; we will discuss them during our monthly call next Tuesday.
Wish you a nice weekend,
Francesco Agresta
European Project Manager
European Digital Learning Network
Via Domenico Scarlatti, 30
20124 Milano
Mob. +39 3496027623
Email: francesco.agresta@dlearn.eu
www.dlearn.eu
Dear Francesco,
We have checked the three surveys and noticed that the text for the questions is right-justified; we think it should be left-justified. Several questions have formatting issues which may significantly inhibit respondents from filling in the survey.
We don't understand how the questions on the survey page "FOSS4SMEs Impact" relate to impact.
In general, few questions actually address the purpose of the survey as stated on its first page: "This survey has been developed to assess the impact of the FOSS4SMEs main outputs on participating SMEs." Most questions are at the level of individual experiences rather than at the organisational (SME) level.
Further, we fear that the large number of questions may reduce the response rate.
We also find that there is a significant risk of a low response rate, in particular for SMEs, when using an online survey tool. Hence, it would be appropriate to also provide an offline alternative (e.g. in the form of an ODS template provided to potential respondents, so that they can fill it in, print it, and send it back via post to address privacy concerns).
Best,
Jonas & Björn
On 2019-06-14 18:44, francesco.agresta@dlearn.eu wrote:
Foss4smes-team mailing list
Foss4smes-team@lists.fsfe.org
https://lists.fsfe.org/mailman/listinfo/foss4smes-team
This mailing list is covered by the FSFE's Code of Conduct. All participants are kindly asked to be excellent to each other: https://fsfe.org/about/codeofconduct
--
Dear Jonas and Björn, thank you for your prompt and valuable feedback.
On 18 June 2019 at 17:33, Jonas Gamalielsson jonas.gamalielsson@his.se wrote:
We have checked the three surveys and notice that the text for questions are right justified, and we think they should be left justified. Several questions have formatting issues which may significantly inhibit respondents from filling in the survey.
I see. I didn't select right justification, so I guess the LimeSurvey system did it automatically. I have only started using it recently, so I can try to fix it.
We don't understand how the questions on the survey page "FOSS4SMEs Impact" relate to impact.
In general, few questions actually address the purpose of the survey, as stated on the first page of the survey(s): "This survey has been developed to assess the impact of the FOSS4SMEs main outputs on participating SMEs.". Most questions are at the level of the individual experiences rather than at the organisational (SME) level.
I get your point, which is entirely reasonable. However, I believe we have to keep in mind the scope of this project and our present status. Do you believe it would be possible to report on impact at the organisational level over the remaining implementation time, given also that the course has still not been released? I can't see this happening right now, but please give me input if I'm wrong. This is why I thought the most practical way to handle this task would be to keep an individual approach.
Further, we fear that the large number of questions may reduce the response rate.
I could take out some of the questions in the two matrices, but I'm afraid the free-text questions at the end of the survey have to stay because they are specifically required by the proposal.
We also find that there is a significant risk for low response rate, in particular for SMEs, when using an online survey tool. Hence, it would be appropriate to also provide an offline alternative (e.g. in the form of an ODS template that is provided to potential respondents so that respondents can fill it in, print it, and sent it back via post (landmail/airmail) to address privacy concerns).
I developed an online survey because Katerina and I thought it would make the dissemination of the questionnaire and the collection/analysis of responses easier. However, if you think you'll need an .odt version of it for the SMEs, that's not going to be an issue.
Best, Francesco
On 2019-06-14 18:44, francesco.agresta@dlearn.eu wrote:
--
Hi Francesco,
I agree with Jonas and Björn that the number of questions could become an issue for the response rate. Specifically, in the matrix, there is some potential to remove questions that look at the same aspect from a different angle. E.g.:
"The learning platform was easy to use" and "I found the learning platform unnecessarily complex"
-> similar, and could be one question
"I felt very confident using the platform"
-> not sure the confidence of the reader is relevant for us
There are also some formatting issues where automated hyphenation (line breaks within one word) seems to be happening, which does not help ease of reading.
I don't know if it will be possible, but maybe you can also reduce the questions for the different units of the course? It would also help to add the number of the unit, so it's a bit clearer.
Best,
Sivan
On Tue, 2019-06-18 at 18:15 +0200, Francesco Agresta wrote:
Dear all,
If I understand correctly, these questions come from the SUS (e.g. https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.ht...) and thus should be used as they are, if we want to derive a result on usability based on the scale of this methodology. The intention is for the same issue to be asked in both a "positive" and a "negative" way, in order to make sure that the user has a clear understanding and does not respond mechanically. Again, I am not that deep into the project; you know best.
Best regards,
Ifigeneia
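[For reference, standard SUS scoring combines the 10 Likert items as follows: odd-numbered (positively worded) items contribute (response - 1), even-numbered (negatively worded) items contribute (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch of this generic method; the function name and example responses are illustrative and not taken from the project surveys:]

```python
def sus_score(responses):
    """Compute the System Usability Scale score from 10 Likert
    responses (each 1-5, in questionnaire order)."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects 10 responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A fully neutral respondent (all 3s) scores 50.0.
print(sus_score([3] * 10))  # 50.0
```

[This is why the matrix must keep the paired positive/negative wording if we want comparable SUS scores: the two directions are scored mirror-wise, so dropping one side of a pair changes the scale.]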
From: Foss4smes-team [mailto:foss4smes-team-bounces@lists.fsfe.org] On Behalf Of Sivan Pätsch
Sent: Thursday, June 20, 2019 4:32 PM
To: Francesco Agresta; Jonas Gamalielsson
Cc: foss4smes-team@lists.fsfe.org
Subject: Re: [FOSS4SMEs-team] FOSS4SMEs_ Impact surveys
Dear Ifigeneia, Sivan and all,
to be honest I wasn't aware of the SUS label, but the concept is right.
That set of questions comes from the need to create a "participant survey" as required by the proposal, which could also serve to evaluate the overall platform during piloting and testing.
However, there are currently 15 questions in the matrix, so if everyone agrees I could cut them to 10 to make it simpler and adapt to the SUS methodology.
Another option might be to remove this matrix from the surveys addressed externally (i.e. SMEs and VET) and leave it only for our internal testing (i.e. the third survey dedicated to project partners).
Reduction of questions in the second matrix + indication of the unit number -> yes, this could easily be adjusted too.
On a more practical note, I have just found out that unfortunately the free version of LimeSurvey allows us only 25 responses per survey. I'm sorry, I wasn't aware of that.
Does any of you (maybe the partners more expert in the FOSS world) have advice on another free and open source tool for creating online surveys that could fit our purpose?
Thank you,
Francesco Agresta
European Project Manager
European Digital Learning Network
Via Domenico Scarlatti, 30
20124 Milano
Mob. +39 3496027623
Email: francesco.agresta@dlearn.eu
www.dlearn.eu
From: Ifigeneia Metaxa metaxa@abe.gr
Sent: Thursday, 20 June 2019, 15:46
To: 'Sivan Pätsch' sivan@openforumeurope.org; 'Francesco Agresta' francesco.agresta@dlearn.eu; 'Jonas Gamalielsson' jonas.gamalielsson@his.se
Cc: foss4smes-team@lists.fsfe.org
Subject: RE: [FOSS4SMEs-team] FOSS4SMEs_ Impact surveys
Dear all,
it's nice to receive detailed feedback from SKUNI, OFE and Ifigeneia. I will now add my own feedback, plus some old notes I have from Ifigeneia on this issue, for consideration.
- How can we track which countries the survey participants come from? We need to prove certain numbers from each country.
- How can we stay in contact with the cooperating respondents so that they fill in the before and after surveys with the reflection statement that is required? Is it better to send these per e-mail? We need guidelines.
- Why did Francesco create the "participant survey" so as to also serve the evaluation of the overall platform during piloting and testing? This is being done with O2/A4, and there are certain tools prepared by SKUNI for this purpose. Be careful not to mix those two different things.
- If only 25 responses per survey are allowed in LimeSurvey, we need to create the surveys for each country separately.
- Apart from the "2 Participant Impact surveys", there are another 8 tools we need to use as a project team. What is the relation of this survey to the rest of the tools? Has Dlearn made sure that there are no overlaps with the other tools?
- Dlearn should be careful not to mix these surveys with Brian's surveys, called "self-diagnostic tool" and "final participant evaluation survey", which will soon be integrated in Moodle and are focused only on the training course and the lessons/knowledge acquired there. Dlearn's surveys focus on the impact of all the results produced within this project. At least this is how I personally understand it.
- Further, since we need e.g. "written evidence on concrete plans", a "data metric document" etc., we need Dlearn to clarify in which order and at which point in the project we need to use each impact tool. This is to avoid confusion and questions during the finalisation of the project.
- Concerning the number of words in your "free text questions", Ifigeneia suggested reducing it from 2000 to 500 if possible.
She also suggested (older notes I kept):
- to ask the questions in a smart way, in order to get the answers, we need;
- to have fewer free text questions;
- to remember that we will need to use the results/information given inside the final report as proof of our work;
- tool nr. 4, “written evidence of concrete plans”, should be developed as a question inside the “after survey”, so that we cover it this way;
- tool nr. 5, “expressions of interest”, should be developed as a question inside the “after survey”, so that we cover it this way;
- tool nr. 8, “Peer Review through staff”, concerning the partners: each of us should write 5 lines on where they will use the project results afterwards. The stakeholders can do it at our Multiplier event in Brussels.
- tool nr. 2, “performance data metric document before and after”, which is for SMEs: she suggested developing it as a questionnaire in LimeSurvey, either extra or inside the “Participant Impact surveys”.
Concerning the numbers, the proposal gives numbers also for other categories of impact measurement or target groups. They are just not on these 2 pages, and one has to look for them elsewhere. Ifigeneia and I have thought about and discussed these numbers at length, and have already suggested to Dlearn the following numbers per partner:
Tool nr. 1: 15
Tool nr. 2: 15
Tool nr. 3: 5
Tool nr. 4: as many as possible
Tool nr. 5: as many as possible
Tool nr. 6: 40
Tool nr. 7: 30+
Since the rest of the team partners don't know about ATL's discussion with Dlearn on this so far, here I explain the numbering:
Tool nr. 1: Participant survey of 15 SMEs and VET trainers (before and after) including a reflection statement
Tool nr. 2: Completion of performance data/metric document by 15 SMEs (before and after)
Tool nr. 3: A set of 5 SME case studies from each partner (30 in total) showcasing the participants' experience and improved performance
Tool nr. 4: Written evidence of concrete plans and/or actual examples of new SMEs and VET trainers using the FOSS4SMEs platform by participants
Tool nr. 5: Number of ‘expressions of interest’/requests to use the FOSS4SMEs platform by other SMEs, VET trainers, Stakeholders and partners.
Tool nr. 6: A regional/national database of stakeholders and key contacts (stakeholders matrix) with at least 40 regional and national stakeholders with responsibility for VET/SME policy and development.
Tool nr. 7: A formal consultation exercise involving 30+ national and European policy-makers/key stakeholders, based on the policy recommendations (O3).
Tool nr. 8: A peer review, for which partners will nominate a staff member not directly involved in pilot activities to complete a questionnaire providing feedback and comments on the project's tangible and intangible outcomes.
Tool nr. 9: A persuasive business case of how to make strategic use of FOSS and open educational resources within VET – can be prepared inside the new chapter of the updated Quality Plan.
The coordinator's suggestion – agreed with Dlearn – is to try to reach these numbers as well as we can. If we reach them, we can be optimistic about a good evaluation during our Final Reporting period. Of course, we can make a collective decision after we receive feedback from TUD and FSFE (the deadline was set for 21.06).
Dear Francesco, when can we have the updated chapter inside the Quality Plan and the tools ready? Is it possible by 28.06?
Best, Katerina
On Thu, 20 Jun 2019 at 6:30 PM, francesco.agresta@dlearn.eu wrote:
Hi Sivan,
you are right, we have no numerical targets for the other categories.
The idea to put a number on them came out of a discussion between me and Katerina, because we thought it would be better for everyone to have a reference target, in order to try to present the same amount of results across the different countries.
I tried to keep the numbers as low as possible to avoid overclaiming, given the time we have left. But, again, this is for us to decide, so let's hear everyone's opinion and make a collective decision.
Best,
Francesco Agresta
European Project Manager
European Digital Learning Network
Via Domenico Scarlatti, 30
20124 Milano
Mob. +39 3496027623
Email francesco.agresta@dlearn.eu
www.dlearn.eu
From: Sivan Pätsch <sivan@openforumeurope.org>
Sent: Thursday, 20 June 2019 16:17
To: francesco.agresta@dlearn.eu; 'FOSS4SMEs mailing list' <foss4smes-team@lists.fsfe.org>
Subject: Re: [FOSS4SMEs-team] Quality Plan v1.2
Hi Francesco,
Thanks for pointing to the update of the quality plan in the call on Tuesday.
I have reviewed chapter 7 on impact, following our discussion on the call, with regard to the number of representatives from the different target groups from whom we want to receive input on the impact measurement.
I see that the application (p 62) points to five case studies per partner for the target group SMEs, but makes no prescription for any other category of impact measurement or target group. Is it therefore necessary to commit to a specific number for the other measurements and target groups? Would it be possible to make no binding commitment for the other categories or at least reduce the numbers significantly, as we have not promised five inputs for the other categories/target groups?
Best,
Sivan
On Thu, 20 Jun 2019 at 5:16 PM, francesco.agresta@dlearn.eu wrote:
Dear Ifigenia, Sivan and all,
to be honest I wasn’t aware of the SUS label, but the concept is right.
That set of questions comes from the need to create a “participant survey” as required by the proposal, which could also serve to evaluate the overall platform during piloting and testing.
However, there are currently 15 questions in the matrix, so if everyone agrees I could cut them to 10 to make it simpler and adapt to the SUS methodology.
Another option might be to remove this matrix from the surveys addressed externally (i.e. SMEs and VET) and leave it only for our internal testing (i.e. the third survey dedicated to project partners).
Reduction of questions in the second matrix + indication of the unit number -> yes, this could be easily adjusted too.
On a more practical side, I have just found out that unfortunately the free version of LimeSurvey allows us only 25 responses per survey created. I’m sorry, I wasn’t aware of that.
Does any of you (maybe the partners more expert in the FOSS world) have any advice on another free and open source tool for creating online surveys that could fit our purpose?
Thank you,
Francesco Agresta
European Project Manager
European Digital Learning Network
Via Domenico Scarlatti, 30
20124 Milano
Mob. +39 3496027623
Email francesco.agresta@dlearn.eu
www.dlearn.eu
From: Ifigeneia Metaxa <metaxa@abe.gr>
Sent: Thursday, 20 June 2019 15:46
To: 'Sivan Pätsch' <sivan@openforumeurope.org>; 'Francesco Agresta' <francesco.agresta@dlearn.eu>; 'Jonas Gamalielsson' <jonas.gamalielsson@his.se>
Cc: foss4smes-team@lists.fsfe.org
Subject: RE: [FOSS4SMEs-team] FOSS4SMEs_ Impact surveys
Dear all,
If I get it right, these questions come from SUS (e.g. https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.ht...) and thus should be used as they are, if we want to derive a result on usability based on the scale of this methodology. The intention is for the same issue to be addressed in both a “positive” and a “negative” way, in order to make sure that the user has a clear understanding and does not respond mechanically. Again, I am not that deep in the project; you know best.
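For context on the scoring Ifigeneia refers to: the standard SUS recipe converts the ten 1-5 Likert responses into a single 0-100 score, reversing the even-numbered (negatively worded) items. A minimal sketch (illustrative only, not project code; the function name is ours):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten Likert
    responses (1 = strongly disagree ... 5 = strongly agree).

    Odd-numbered items are positively worded and contribute (score - 1);
    even-numbered items are negatively worded and contribute (5 - score).
    The summed contributions (0-40) are scaled by 2.5 to give 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item_number, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (r - 1) if item_number % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a fairly positive respondent
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```

This is also why the positive/negative item pairs should not be pruned or reworded individually: the scoring rule assumes the fixed alternation of item polarity.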
Best regards,
Ifigeneia
From: Foss4smes-team [mailto:foss4smes-team-bounces@lists.fsfe.org] On Behalf Of Sivan Pätsch
Sent: Thursday, June 20, 2019 4:32 PM
To: Francesco Agresta; Jonas Gamalielsson
Cc: foss4smes-team@lists.fsfe.org
Subject: Re: [FOSS4SMEs-team] FOSS4SMEs_ Impact surveys
Hi Francesco,
I agree with Jonas and Björn that the number of questions could become an issue for the response rate. Specifically, in the matrix there is some potential to remove questions that look at the same aspect from a different angle.
e.g.:
"The learning platform was easy to use" and "I found the learning platform unnecessarily complex"
-> similar, and could be combined into one question
"I felt very confident using the platform"
-> not sure if the confidence of the reader is relevant for us
There are also some formatting issues where automated hyphenation (line breaks within one word) seems to be happening, which does not help ease of reading.
I don't know if it will be possible, but maybe you can also reduce the questions for the different units of the course? It could also help to add the number of the unit so it's a bit clearer.
Best,
Sivan
On Tue, 2019-06-18 at 18:15 +0200, Francesco Agresta wrote:
Dear Jonas and Bjorn,
thank you for your prompt and valuable feedback.
On 18 June 2019 at 17:33, Jonas Gamalielsson <jonas.gamalielsson@his.se> wrote:
We have checked the three surveys and notice that the text for questions
are right justified, and we think they should be left justified. Several
questions have formatting issues which may significantly inhibit
respondents from filling in the survey.
I see. I didn't select right justification, so I guess the LimeSurvey system did it automatically. I have only started using it recently, so I can try to fix it.
We don't understand how the questions on the survey page "FOSS4SMEs Impact" relate to impact.
In general, few questions actually address the purpose of the survey, as stated on the first page of the survey(s): "This survey has been developed to assess the impact of the FOSS4SMEs main outputs on participating SMEs." Most questions are at the level of individual experiences rather than at the organisational (SME) level.
I get your point, which is entirely reasonable. However, I believe we have to keep in mind the scope of this project and our present status. Do you believe it would be possible to report on impact at the organisational level over the remaining implementation time, also given that the course has still not been released?
I can't see this happening right now, but please give me input if I'm wrong. This is why I thought the easiest way to handle this task would be to keep an individual approach.
Further, we fear that the large number of questions may reduce the
response rate.
I could take out some of the questions in the two matrices, but I'm afraid the free-text questions at the end of the survey have to stay, because they are specifically required by the proposal.
We also find that there is a significant risk of a low response rate, in particular for SMEs, when using an online survey tool. Hence, it would be appropriate to also provide an offline alternative (e.g. an ODS template provided to potential respondents so that they can fill it in, print it, and send it back via post (landmail/airmail) to address privacy concerns).
I developed an online survey because Katerina and I thought it would make the dissemination of the questionnaire and the collection/analysis of responses easier. However, if you think you'll need an .odt version of it for the SMEs, that's not going to be an issue.
Best,
Francesco
On 2019-06-14 18:44, francesco.agresta@dlearn.eu wrote the original announcement (quoted above).
Foss4smes-team mailing list
Foss4smes-team@lists.fsfe.org
https://lists.fsfe.org/mailman/listinfo/foss4smes-team
This mailing list is covered by the FSFE's Code of Conduct. All participants are kindly asked to be excellent to each other: https://fsfe.org/about/codeofconduct
--
Sivan Pätsch
Digital Policy Adviser
OpenForum Europe
tel +32 (0) 2 486 4151
mob +32 (0) 484 90 71 23
web http://www.openforumeurope.org
Follow us on Twitter: @OpenForumEurope (https://twitter.com/OpenForumEurope)
OFE Limited, a private company with liability limited by guarantee. Registered in England and Wales with number 05493935. Registered office: Claremont House, 1 Blunt Road, South Croydon, Surrey CR2 7PA, UK
Hello, my replies are inline below.
Regards,
Francesco
From: Foss4smes-team <foss4smes-team-bounces@lists.fsfe.org> On Behalf Of Katerina Tsinari
Sent: Friday, 21 June 2019 16:36
To: FOSS4SMEs mailing list <foss4smes-team@lists.fsfe.org>
Cc: Cosmas Vamvalis <vamvalis@abe.gr>
Subject: Re: [FOSS4SMEs-team] R: FOSS4SMEs_ Impact surveys
Dear all,
It's nice to receive detailed feedback from SKUNI, OFE and Ifigeneia. I will now add my own feedback, plus some old notes I have from Ifigeneia on this issue, for consideration.
* How can we track which countries the survey participants come from? We need to prove certain numbers from each country.
Good point – I'll add a question on the country of origin.
* How can we stay in contact with the cooperating respondents so they complete the before and after surveys with the required reflection statement? Is it better to send these per e-mail? We need guidelines.
I don't understand this statement, nor the request for guidelines. You explicitly asked me for an online survey we could easily send out to our targets. As happened with the research in IO1, results will then be presented and evaluated as an aggregate. I can't see how or why we should build individual profiles of respondents. Please clarify your remark and state your expectations on that.
* Why did Francesco create the “participant survey” so that it serves the evaluation of the overall platform during piloting and testing? This is being done with O2/A4, and there are certain tools prepared by SKUNI for this purpose. Be careful not to mix those two different things.
Because this is the way I thought a “participant survey” might look. How can you survey participants on the (short-term) impact of a training platform? My idea is that you should ask them about design, features and contents. If you read the questions asked in this first matrix and those asked in the spreadsheet developed by SKUNI for O2/A4, you'll notice that they are indeed different. And, in any case, my opinion is that we should take a smart approach while reading the proposal and look at the different sections as a whole, since there are horizontal sections that are not watertight compartments and actually do mix and interact, as in this case.
Once again, if you don't agree with what has already been developed, can you please clarify what your expectations are here and share your idea of this participant survey?
* If only 25 responses per survey are allowed in LimeSurvey, we need to create the surveys for each country separately.
This is one of the options, even if I would leave the door open for another solution that would allow us to manage 3 survey links instead of 18 (6x3).
* Apart from the “2 Participant Impact surveys”, there are another 8 tools we need to use as a project team. What is the relation of this survey to the rest of the tools? Has Dlearn made sure that there are no overlaps with the other tools?
Here again, I'm sorry, I don't get this point. Can you please explain it in other words?
As I said before, there certainly are interactions between different headings of the project, because “Impact” is a horizontal section that covers different phases of a project's implementation. This count of 9 tools is something you came up with as a means of simplification for internal management, but the application does not have this count of 9. In fact, you'll see that some of the items you listed are already being implemented (i.e. the stakeholders matrix and the peer review process) and some of them can't be listed as a separate “tool” (e.g., we agreed that the “number of ‘expressions of interest’/requests to use the FOSS4SMEs platform” should be included in the survey as an open question, so I can't look at this as a single separate tool in itself).
Please clarify.
* Dlearn should be careful not to mix these surveys with Brian's surveys called “self-diagnostic tool” and “final participant evaluation survey”, which will soon be integrated in Moodle and are focused only on the training course and the lessons/knowledge acquired there. Dlearn's surveys focus on the impact of all the results produced within this project. At least this is how I personally understand it.
I'm sorry to read this point, since we discussed this during our last call only a few days ago and there was general agreement that the “pre-diagnostic tool” would account for the “before”. No objections were made.
In any case, I see that we then have another “final participant evaluation survey”, which I now suppose should be treated as a different thing. So my count now is:
* pre-diagnostic tool;
* final participant evaluation survey;
* assessment tools (IO2/A4);
* (9) impact tools, including the participant survey.
My question is: how are we going to deal with all these measures and report on them? What is the plan? How do they precisely differ from one another? I think everyone would need a clarification on how to proceed.
* Further, since we need e.g. “written evidence on concrete plans”, a “data metric document”, etc., we need Dlearn to clarify in which order and at which point in the project we need to use each impact tool. This is to avoid confusion and questions during the finalisation of the project.
All the surveys are to be sent after the course's finalisation and release, asking participants to take them after completing the course. The project ends in October, so I don't see a different answer from this, nor could I make a timeline if we still don't know when the full platform will be available to the public. Once again, if you have a different view, please clarify.
* Concerning the number of words in the "free text questions", Ifigeneia suggested reducing the limit from 2000 to 500 if possible.
The limit is 2,000 characters, not words.
She also suggested (older notes I kept):
- to ask the questions in a smart way, in order to get the answers we need;
- to have fewer free text questions;
There are no free text questions apart from the ones explicitly required by the application.
- to remember that we will need to use the results/information given inside the final report as proof of our work;
- tool nr. 4, “written evidence of concrete plans”, should be developed as a question inside the “after survey”, so that we cover it this way;
Done.
- tool nr. 5, “expressions of interest”, should be developed as a question inside the “after survey”, so that we cover it this way;
Done.
- tool nr. 8, “Peer Review through staff”, concerning the partners: each of us should write 5 lines on where they will use the project results afterwards. The stakeholders can do it at our Multiplier event in Brussels.
Why? I mean, this is the same peer review process we have been implementing since the beginning of the project. To fulfil what is required in this section of the application, I made another version of the survey addressed to ourselves as partners. What do the stakeholders have to do with the peer review process, which is exclusively internal? Be careful not to mix those things. Such statements should be included in the “Exploitation Plan” instead.
- tool nr. 2, “performance data metric document before and after”, which is for SMEs: she suggested developing it as a questionnaire in LimeSurvey, either extra or inside the “Participant Impact surveys”.
Done.
Concerning the numbers, the proposal gives numbers also for other categories of impact measurement or target groups. They are just not on these 2 pages, and one has to look for them elsewhere. Ifigeneia and I have thought about and discussed these numbers at length, and have already suggested to Dlearn the following numbers per partner:
Tool nr. 1: 15
Tool nr. 2: 15
Tool nr. 3: 5
Tool nr. 4: as many as possible
Tool nr. 5: as many as possible
Tool nr. 6: 40
Tool nr. 7: 30+
Since the rest of the team partners don't know about ATL's discussion with Dlearn on this so far, here I explain the numbering:
Tool nr. 1: Participant survey of 15 SMEs and VET trainers (before and after) including a reflection statement
Tool nr. 2: Completion of performance data/metric document by 15 SMEs (before and after)
Tool nr. 3: A set of 5 SME case studies from each partner (30 in total) showcasing the participants' experience and improved performance
Tool nr. 4: Written evidence of concrete plans and/or actual examples of new SMEs and VET trainers using the FOSS4SMEs platform by participants
Tool nr. 5: Number of ‘expressions of interest’/requests to use the FOSS4SMEs platform by other SMEs, VET trainers, Stakeholders and partners.
Tool nr. 6: A regional/national database of stakeholders and key contacts (stakeholders matrix) with at least 40 regional and national stakeholders with responsibility for VET/SME policy and development.
Tool nr. 7: A formal consultation exercise involving 30+ national and European policy-makers/key stakeholders, based on the policy recommendations (O3).
Tool nr. 8: A peer review, for which partners will nominate a staff member not directly involved in pilot activities to complete a questionnaire providing feedback and comments on the project's tangible and intangible outcomes.
Tool nr. 9: A persuasive business case of how to make strategic use of FOSS and open educational resources within VET – can be prepared inside the new chapter of the updated Quality Plan.
This is not a tool, nor a deliverable. Please read the whole paragraph of the application from which you isolated this sentence (p. 61) and you'll see that it's the project as a whole that should be taken as a “persuasive business case”.
The coordinator's suggestion – agreed with Dlearn – is to try to reach these numbers as well as we can. If we reach them, we can be optimistic about a good evaluation during our Final Reporting period. Of course, we can make a collective decision after we receive feedback from TUD and FSFE (the deadline was set for 21.06).
Dear Francesco, when can we have the updated chapter inside the Quality plan and the tools ready? Is it possible by 28.06?
I can’t answer this question until we clarify all the points above and we get to a full and shared understanding of the whole picture.
Best,
Katerina
Στις Πέμ, 20 Ιουν 2019 στις 6:30 μ.μ., ο/η <francesco.agresta@dlearn.eu mailto:francesco.agresta@dlearn.eu > έγραψε:
Hi Sivan,
you are right, we have no numerical targets for the other categories.
The idea to put a number on them came out from the discussion between me and Katerina, because we thought it would have been better for everyone to have a reference target in order to try and present the same amount of results across the different countries.
I tried to keep the numbers as low as possible to avoid overclaiming, given the time we have left. But, again, this is for us to be decided, so let’s hear everyone’s opinion and make a collective decision.
Best,
Francesco Agresta
European Project Manager
European Digital Learning Network
Via Domenico Scarlatti, 30
20124 Milano
Mob. +39 3496027623
Email francesco.agresta@dlearn.eu mailto:francesco.agresta@dlearn.eu
http://www.dlearn.eu/ www.dlearn.eu
Da: Sivan Pätsch <sivan@openforumeurope.org mailto:sivan@openforumeurope.org > Inviato: giovedì 20 giugno 2019 16:17 A: francesco.agresta@dlearn.eu mailto:francesco.agresta@dlearn.eu ; 'FOSS4SMEs mailing list' <foss4smes-team@lists.fsfe.org mailto:foss4smes-team@lists.fsfe.org > Oggetto: Re: [FOSS4SMEs-team] Quality Plan v1.2
Hi Francesco,
Thanks for pointing to the update of the quality plan in the call on Tuesday.
I have reviewed chapter 7 on the impact according to our discussion on the call in regard to the number of representatives from the different target groups we want to receive input on the impact measurement.
I see that the application (p 62) points to five case studies per partner for the target group SMEs, but makes no prescription for any other category of impact measurement or target group. Is it therefore necessary to commit to a specific number for the other measurements and target groups? Would it be possible to make no binding commitment for the other categories or at least reduce the numbers significantly, as we have not promised five inputs for the other categories/target groups?
Best,
Sivan
Στις Πέμ, 20 Ιουν 2019 στις 5:16 μ.μ., ο/η <francesco.agresta@dlearn.eu mailto:francesco.agresta@dlearn.eu > έγραψε:
Dear Ifigenia, Sivan and all,
to be honest I wasn’t aware of the SUS label, but the concept is right.
That set of questions come from the need to create a “participant survey” as required by the proposal, which could also serve to evaluate the overall platform during piloting and testing.
However, there are currently 15 questions in the matrix, so if everyone agrees I could cut them to 10 to make it simpler and adapt to the SUS methodology.
Another option might be to remove this matrix from the surveys addressed externally (i.e. SMEs and VET) and leave it only for our internal testing (i.e. the third survey dedicated to project partners).
Reduction of questions in the second matrix + indication of the unit number -> yes, this could be easily adjusted too.
On a more practical side, I have just found out that unfortunately the free version of LimeSurvey allows us only 25 responses per survey created. I’m sorry, I wasn’t aware of that.
Do someone of you (maybe the partners more expert in the FOSS world) have any advice on other free and open source tool to create online surveys that could be fit for our purpose?
Thank you,
Francesco Agresta
European Project Manager
European Digital Learning Network
Via Domenico Scarlatti, 30
20124 Milano
Mob. +39 3496027623
Email mailto:francesco.agresta@dlearn.eu francesco.agresta@dlearn.eu
http://www.dlearn.eu/ www.dlearn.eu
Da: Ifigeneia Metaxa <metaxa@abe.gr mailto:metaxa@abe.gr > Inviato: giovedì 20 giugno 2019 15:46 A: 'Sivan Pätsch' <sivan@openforumeurope.org mailto:sivan@openforumeurope.org >; 'Francesco Agresta' <francesco.agresta@dlearn.eu mailto:francesco.agresta@dlearn.eu >; 'Jonas Gamalielsson' <jonas.gamalielsson@his.se mailto:jonas.gamalielsson@his.se > Cc: foss4smes-team@lists.fsfe.org mailto:foss4smes-team@lists.fsfe.org Oggetto: RE: [FOSS4SMEs-team] FOSS4SMEs_ Impact surveys
Dear all,
If I get it right, these questions come from SUS (eg. https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.ht...) and, thus, should be used as they are, if we want to derive a result on the usability, based on the scale of this methodology. The intention is for the same issue to be addressed/asked in both a “positive” and “negative” way, in order to make sure that the user has a clear understanding and does not respond mechanically. Again, I am not that deep in the project, you know best.
Best regards,
Ifigeneia
From: Foss4smes-team [mailto:foss4smes-team-bounces@lists.fsfe.org] On Behalf Of Sivan Patsch Sent: Thursday, June 20, 2019 4:32 PM To: Francesco Agresta; Jonas Gamalielsson Cc: foss4smes-team@lists.fsfe.org mailto:foss4smes-team@lists.fsfe.org Subject: Re: [FOSS4SMEs-team] FOSS4SMEs_ Impact surveys
Hi Francesco,
I agree with Jonas and Björn that the number of questions could become an issue to for the response rate. Specifically, in the matrix, there is some potential to remove questions that look at the same aspect from a different angle.
e.g.:
"The learning platform was easy to use" and "I found the learning platform unnecessarily complex"
-> a similar and could be one question
"I felt very confident using the platform"
-> not sure if the confidence of the reader is relevant for us
There are also some formatting issues where there seems automated hyphenation (line breaks within one word) happening which does not help ease of reading.
I don't know if it will be possible, but maybe you can also reduce the questions for the different units of the course? A help could also be to add the number of the unit so it's a bit clearer.
Best,
Sivan
On Tue, 2019-06-18 at 18:15 +0200, Francesco Agresta wrote:
Dear Jonas and Bjorn, thank you for your prompt and valuable feedback.
Il 18 giugno 2019 alle 17.33 Jonas Gamalielsson < jonas.gamalielsson@his.se mailto:jonas.gamalielsson@his.se
ha scritto:
We have checked the three surveys and notice that the text for questions are right justified, and we think they should be left justified. Several questions have formatting issues which may significantly inhibit respondents from filling in the survey.
I see, I didn't select a right justification so I guess that the Limesurvey system did it automatically. I have started to use it only recently, so I can try to fix it.
We don't understand how the questions on the survey page "FOSS4SMEs Impact" relate to impact.
In general, few questions actually address the purpose of the survey, as stated on the first page of the survey(s): "This survey has been developed to assess the impact of the FOSS4SMEs main outputs on participating SMEs.". Most questions are at the level of the individual experiences rather than at the organisational (SME) level.
I get your point, which is entirely reasonable. However, I believe we have to keep in mind the scope of this project and our present status. Do you believe it would be possible to report on impact at the organisational level over the remaining implementation time, given also that the course has still not been released? I can't see this happening right now, but please give me input if I'm wrong. This is why I thought the most practical way to handle this task would be to keep an individual approach.
Further, we fear that the large number of questions may reduce the response rate.
I could take out some of the questions in the two matrixes, but I'm afraid the free-text questions at the end of the survey have to stay because they are specifically asked in the proposal.
We also find that there is a significant risk of a low response rate, in particular for SMEs, when using an online survey tool. Hence, it would be appropriate to also provide an offline alternative, e.g. in the form of an ODS template provided to potential respondents, who can fill it in, print it, and send it back via post (landmail/airmail) to address privacy concerns.
I developed an online survey because Katerina and I thought it would make the dissemination of the questionnaire and the collection/analysis of responses easier. However, if you think you'll need an .odt version of it for the SMEs, that's not going to be an issue.
Best, Francesco
On 2019-06-14 18:44, francesco.agresta@dlearn.eu wrote:
Dear partners,
I’m sending here the links for three impact surveys we are supposed to send out and have filled in during these final months until the end of the project.
They relate to three different target groups:
1. SMEs
https://foss4smes.limequery.com/1?lang=en
2. VET Centres, Trainers and Coaches
https://foss4smes.limequery.com/2?lang=en
3. Project partners (i.e. ourselves)
https://foss4smes.limequery.com/3?lang=en
The fourth target group to be surveyed will be “Other stakeholders” (e.g. policy makers in digital education). They will be part of a “formal consultation based on Intellectual Output 3”, which is still under development.
However, this fourth group will be most probably approached exploiting the occasion of the final conference in Brussels.
In addition, please find attached a template for the collection of SME case studies showcasing the participants’ experience and improved performance. We are supposed to collect 5 case studies per partner, 30 in total.
These activities relate to the “Impact” strategy described on pages 62-63 of the proposal.
I have started updating the Quality Plan accordingly with all the necessary information (you will find it in Keybase), and it will be finalised as soon as we are also done with the 4th target group and the self-diagnostic tool (which is supposed to depict the “before” situation of participants).
Please have a look at the surveys and we will discuss them during our monthly call coming next Tuesday.
Wish you a nice weekend,
Francesco Agresta
European Project Manager
European Digital Learning Network
Via Domenico Scarlatti, 30
20124 Milano
Mob. +39 3496027623
Email francesco.agresta@dlearn.eu
www.dlearn.eu
_______________________________________________ Foss4smes-team mailing list Foss4smes-team@lists.fsfe.org
https://lists.fsfe.org/mailman/listinfo/foss4smes-team
This mailing list is covered by the FSFE's Code of Conduct. All participants are kindly asked to be excellent to each other: https://fsfe.org/about/codeofconduct
Dear Francesco,
thank you for your thoughts. My replies below in blue.
Thank you for the hard work and for finishing this as soon as possible.
Regards,
Katerina
On Fri, 21 Jun 2019 at 8:05 PM, francesco.agresta@dlearn.eu wrote:
Hello, my replies below in red.
Regards,
Francesco
From: Foss4smes-team foss4smes-team-bounces@lists.fsfe.org On Behalf Of Katerina Tsinari Sent: Friday, 21 June 2019 16:36 To: FOSS4SMEs mailing list foss4smes-team@lists.fsfe.org Cc: Cosmas Vamvalis vamvalis@abe.gr Subject: Re: [FOSS4SMEs-team] R: FOSS4SMEs_ Impact surveys
Dear all,
it's nice to receive detailed feedback from SKUNI, OFE and Ifigeneia. I will now add my own feedback, plus some old notes I have from Ifigeneia on this issue, for consideration.
- How can we track which countries the survey participants come from? We need to prove certain numbers from each country. Good point - I’ll add a question on the country of origin.
- GOOD
- How can we stay in contact with the cooperating respondents so that they fill in the before and after surveys with the reflection statement that is required? Is it better to send these per e-mail? We need guidelines.
I don’t understand this statement, nor the request for guidelines. You explicitly asked me for an online survey we could easily send out to our targets. As happened with the research in IO1, results will then be presented and evaluated as an aggregate. I can’t see how or why we should build individual profiles of respondents. Please clarify your remark and state your expectations on that.
- IT’S CORRECT THAT YOU DEVELOPED A SURVEY. WE DON’T NEED TO CREATE INDIVIDUAL PROFILES OF RESPONDENTS. WE NEED TO MAKE SURE THAT WE ASK THE SAME PEOPLE BEFORE AND AFTER. IF WE GET AN E-MAIL ADDRESS FROM THEM IN THE “BEFORE” SURVEY, WE CAN USE IT FOR THE “AFTER” SURVEY. THIS IS WHY I ASKED YOU TO COOPERATE WITH BRIAN ON THIS AND NOTIFY ME THAT THIS IS SECURED.
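The before/after pairing described in the reply above could be sketched as follows (a hypothetical illustration, not project code; the `email` field name and the list-of-dicts shape are assumptions about how a survey export might look):

```python
# Hypothetical sketch: pair "before" and "after" survey responses by the
# respondent's e-mail address. The "email" key is assumed, not taken from
# the actual LimeSurvey export format.
def pair_responses(before, after):
    """before/after: lists of response dicts, each containing an 'email' key."""
    # Normalise addresses so "Alice@x.eu " and "alice@x.eu" still match.
    after_by_email = {r["email"].strip().lower(): r for r in after}
    pairs, unmatched = [], []
    for b in before:
        key = b["email"].strip().lower()
        if key in after_by_email:
            pairs.append((b, after_by_email[key]))
        else:
            unmatched.append(b)  # respondent never took the "after" survey
    return pairs, unmatched
```

Results would still be reported only in aggregate; the address serves purely as a join key between the two survey rounds, not as an individual profile.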
- Why did Franc. create the “participant survey” so as to serve the evaluation of the overall platform during piloting and testing? This is being done with O2/A4 and there are certain tools prepared by SKUNI for this purpose. Be careful not to mix those two different things. Because this is the way I thought a “participant survey” might look. How can you survey participants on the (short-term) impact of a training platform? My idea is that you should ask them about design, features and contents. If you read the questions asked in this first matrix and those asked in the spreadsheet developed by SKUNI for O2/A4, you’ll notice that they are indeed different. And, in any case, my opinion is that we should take a smart approach while reading the proposal and look at the different sections as a whole, since there are horizontal sections that are not watertight compartments and actually do mix and interact, as in this case. Once again, if you don’t agree with what has already been developed, can you please clarify what your expectations are and share your idea for this participant survey?
- IF THE QUESTIONS ARE DIFFERENT IN THE TWO TOOLS, THEN WE ARE OK.
SKUNI’S TOOLS ASK REPRESENTATIVES OF A TARGET GROUP (15 PER COUNTRY) FOR THEIR OPINION ABOUT THE ENTIRE TRAINING SYSTEM, AND ONE VET PROVIDER TO TEST THE APPLICABILITY AND EXPLOITABILITY OF THE COURSE WITHIN THE VET SECTOR AT LOCAL LEVEL. DLEARN’S IMPACT TOOLS CHECK, FOR EXAMPLE, WHETHER SMES IMPROVED THEIR DIGITAL SKILLS, FOSS UNDERSTANDING AND COMPETITIVENESS, AND NOW HAVE IMPROVED/MEASURABLE PRODUCTIVITY AND/OR PERFORMANCE ETC. IF THIS IS THE CASE WITH ALL THE TARGET GROUPS, WE ARE MORE THAN FINE HERE.
- If only 25 responses per survey are allowed in Limesurvey, we need to create the surveys for each country separately. This is one of the options, even if I would leave the door open for another solution that would allow us to manage 3 survey links instead of 18 (6x3).
- We cannot wait until the other partners find a solution to this. Please proceed as fast as you can. Please decide what is best here.
- Apart from the “2 Participant Impact surveys” there are another 8 tools we need to use as a project team. What is the relation of this survey to the rest of the tools? Has Dlearn made sure that there are no overlaps with the other tools? Here again, I’m sorry, I don’t get this point. Can you please explain it in other words? As I said before, there surely are interactions between different headings of the project, because “Impact” is a horizontal section that covers different phases of a project’s implementation. This count of 9 tools is something you came up with as a means of simplification for internal management, but the application does not contain this count of 9. In fact, you’ll see that some of the items you listed are already being implemented (i.e. the stakeholders matrix, the peer review process) and some of them can’t be listed as a separate “tool” (e.g., we agreed that the “number of ‘expressions of interest’/requests to use the FOSS4SMEs platform” should be included in the survey as an open question, so I can’t look at this as a single separate tool itself). Please clarify.
- Please present the 9 (or 8) tools in the Quality Plan in the simplified way I showed you already in our previous discussions. You can then note to the side that these points are covered, for example, inside this survey, or through this process, etc., and this is why you don’t analyse them further. Important here is to show that you are aware of all these points of the proposal and how they are connected to each other. Similarly important is that you make clear which are the remaining points for the partners to cover. When a partner (and the evaluator in the end) reads this new chapter in the Quality Plan, he should have no further questions for me or you. This is our goal. You have already started this in the Quality Plan and you just need to enrich it accordingly. It will be super then.
- Dlearn should be careful not to mix these surveys with Brian’s surveys, called “self-diagnostic tool” and “final participant evaluation survey”, which will soon be integrated in Moodle and are focused only on the training course and the lessons/knowledge acquired there. Dlearn’s surveys focus on the impact of all the results produced within this project. At least this is how I personally understand it. I’m sorry to read this point, since we discussed this during our last call only a few days ago and there was general agreement on the fact that the “pre-diagnostic tool” would account for the “before”. No objections were made. --> I made no objection at the call, because I hoped I had understood you wrongly. But, since you both agreed on this, I step back, and we proceed as you think is best, by using Brian’s “pre-course survey” as your “BEFORE-impact participant survey”. This has already been developed in Moodle by me and you can take a look. Please inform me and Brian if you want any changes by 27.06. In any case, I see that we then have another “final participant evaluation survey”, which I now suppose should be treated as a different thing.
- We have a post-course survey, which checks what the learner has learned (might be connected to a certificate), and a “final participant evaluation survey”, which gets feedback from the participant on the whole course. The second one could be used as your “AFTER-impact survey” if you like. Please inform me and Brian if you want any changes by 27.06. This is why I have asked you to cooperate with each other for so long, so that we don’t do double work and see what is most convenient for us and our respondents.
So my count now is:
- Pre-diagnostic tool;
- Final participant evaluation survey;
- Assessment tools (IO2/A4);
- (9) Impact tools, including the participant survey.
My question is: how are we going to deal with all these measures and report on them? What is the plan? How do they precisely differ from one another? I think everyone would need a clarification on how to proceed.
- How we deal with them and their differences is written in each deliverable. Maybe if we now had your updated Quality Plan, this would all be clear to you and the team. We would all then know how to proceed. I offered months ago to check your impact tools and updated Quality Plan before showing them to the team, but you decided to share what you had prepared recently. I have no problem with this decision. It only seems risky to me to create confusion. I suggest that you follow my guidelines and I am sure we will have a great result to present to the team in the end. We can discuss the fine details, but not everything. There is no time for such discussions anyway.
- Further, since we need, for example, “written evidence on concrete plans”, a “data metric document” etc., we need Dlearn to clarify in which order and at which point in the project we need to use each impact tool. This is to avoid confusion and questions during the finalisation of the project. All the surveys are to be sent after the course finalisation and release, asking participants to take them after completing the course. The project ends in October, so I don’t see a different answer from this, nor could I make a timeline if we still don’t know when the full platform will be available to the public. Once again, if you have a different view, please clarify.
- The platform will be available to the public as soon as your tools, Brian’s tools and the partners’ updates are ready. We then only need to upload the videos and create the other user scenarios. When the tools are explained in the Quality Plan, each partner will know when to use each tool. I suggested that you create a time plan if you saw that this is necessary for the partners to understand how they need to proceed (to avoid receiving questions). This was the only reason.
- Concerning the number of words in your "free text questions", Ifigeneia suggested to reduce it from 2000 to 500 if possible. The limit is 2.000 characters, not words.
- SHE MEANT CHARACTERS.
She also suggested (older notes I kept):
- to ask the questions in a smart way, in order to get the answers we need;
- to have fewer free text questions; there are no free text questions apart from the ones explicitly required by the application.
- GOOD
- to remember that we will need to use the results/information given inside the final report as a proof of our work;
- tool nr.4 “written evidence of concrete plans” should be developed as a question inside the “after survey”, so that we cover it this way; done
- GOOD
- tool nr.5 “expressions of interest” should be developed as a question inside the “after survey”, so that we cover it this way; done
- GOOD
- tool nr.8 “Peer Review through staff”: concerning the partners, each of us should write 5 lines on where they will use the project results afterwards. The stakeholders can do it at our multiplier event in Brussels. Why? I mean, this is the same peer review process we have been implementing since the beginning of the project. To fulfil what is required in this section of the application, I made another version of the survey addressed to ourselves as partners. What do the stakeholders have to do with the peer review process, which is exclusively internal? Be careful not to mix those things. Such statements should be included in the “Exploitation Plan” instead.
- This is how Ifigeneia understood this point. Since you are the Quality Manager, we follow your suggestion to work with the same process. No problem. Just write it in the Quality Plan to have it clearly stated.
- tool nr. 2 “performance data metric document before and after”, which is for SMEs: she suggested developing it as a questionnaire in LimeSurvey, either extra or inside the “Participant Impact surveys”. done
- GOOD
Concerning the numbers, the proposal gives numbers also for other categories of impact measurement or target group. They are just not in these 2 pages and one has to look for them elsewhere. Ifigeneia and I have long thought about and discussed these numbers and have already suggested to Dlearn the following numbers per partner:
Tool nr. 1: 15
Tool nr. 2: 15
Tool nr. 3: 5
Tool nr. 4: as many as possible
Tool nr. 5: as many as possible
Tool nr. 6: 40
Tool nr. 7: 30+
Since the rest of the team partners don’t know about ATL’s discussion with Dlearn on this so far, here I explain the numbering:
Tool nr. 1: Participant survey of 15 SMEs and VET trainers (before and after) including a reflection statement
Tool nr. 2: Completion of performance data/metric document by 15 SMEs (before and after)
Tool nr. 3: A set of 5 SME case studies from each partner (30 in total) showcasing the participants’ experience and improved performance
Tool nr. 4: Written evidence of concrete plans and/or actual examples of new SMEs and VET trainers using the FOSS4SMEs platform by participants
Tool nr. 5: Number of ‘expressions of interest’/requests to use the FOSS4SMEs platform by other SMEs, VET trainers, Stakeholders and partners.
Tool nr. 6: a regional/national database of stakeholders and key contacts (stakeholders matrix) with at least 40 regional and national stakeholders with responsibility for VET/SME policy and development.
Tool nr. 7: A formal consultation exercise involving 30+ national and European policy-makers/key stakeholders based on the policy recommendations (O3) .
Tool nr. 8: A peer review, for which partners will nominate a staff member not directly involved in pilot activities to complete a questionnaire providing feedback and comments on the project’s tangible and intangible outcomes.
Tool nr. 9: A persuasive business case of how to make strategic use of FOSS and of open educational resources within VET – can be prepared inside the new chapter of the updated Quality Plan. This is not a tool, nor a deliverable. Please read the whole paragraph of the application from which you isolated this sentence (p. 61) and you’ll see that it’s the project as a whole that should be taken as a “persuasive business case”.
- ok, leave this out then.
The coordinator’s suggestion – agreed with Dlearn – is to try to reach these numbers as best we can. If we reach them, we can be optimistic about a good evaluation during our final reporting period. Of course we can make a collective decision after we receive feedback from TUD and FSFE (the deadline was set for 21.06.).
Dear Francesco, when can we have the updated chapter inside the Quality plan and the tools ready? Is it possible *by 28.06?*
I can’t answer this question until we clarify all the points above and we get to a full and shared understanding of the whole picture.
- Dear Francesco, I need to ask you again, since time is pushing. When can we have the updated chapter inside the Quality plan and the tools ready? Is it possible *by 28.06?*
Best,
Katerina
On Thu, 20 Jun 2019 at 6:30 PM, francesco.agresta@dlearn.eu wrote:
Hi Sivan,
you are right, we have no numerical targets for the other categories.
The idea to put a number on them came out of a discussion between me and Katerina, because we thought it would be better for everyone to have a reference target, in order to try and present the same amount of results across the different countries.
I tried to keep the numbers as low as possible to avoid overclaiming, given the time we have left. But, again, this is for us to be decided, so let’s hear everyone’s opinion and make a collective decision.
Best,
Francesco Agresta
European Project Manager
European Digital Learning Network
Via Domenico Scarlatti, 30
20124 Milano
Mob. +39 3496027623
Email francesco.agresta@dlearn.eu
www.dlearn.eu
From: Sivan Pätsch sivan@openforumeurope.org Sent: Thursday, 20 June 2019 16:17 To: francesco.agresta@dlearn.eu; 'FOSS4SMEs mailing list' <foss4smes-team@lists.fsfe.org> Subject: Re: [FOSS4SMEs-team] Quality Plan v1.2
Hi Francesco,
Thanks for pointing to the update of the quality plan in the call on Tuesday.
I have reviewed chapter 7 on impact, following our discussion on the call regarding the number of representatives from the different target groups from whom we want to receive input for the impact measurement.
I see that the application (p 62) points to five case studies per partner for the target group SMEs, but makes no prescription for any other category of impact measurement or target group. Is it therefore necessary to commit to a specific number for the other measurements and target groups? Would it be possible to make no binding commitment for the other categories or at least reduce the numbers significantly, as we have not promised five inputs for the other categories/target groups?
Best,
Sivan
On Thu, 20 Jun 2019 at 5:16 PM, francesco.agresta@dlearn.eu wrote:
Dear Ifigenia, Sivan and all,
to be honest I wasn’t aware of the SUS label, but the concept is right.
That set of questions comes from the need to create a “participant survey” as required by the proposal, which could also serve to evaluate the overall platform during piloting and testing.
However, there are currently 15 questions in the matrix, so if everyone agrees I could cut them to 10 to make it simpler and adapt to the SUS methodology.
Another option might be to remove this matrix from the surveys addressed externally (i.e. SMEs and VET) and leave it only for our internal testing (i.e. the third survey dedicated to project partners).
Reduction of questions in the second matrix + indication of the unit number -> yes, this could be easily adjusted too.
On a more practical side, I have just found out that unfortunately the free version of LimeSurvey allows us only 25 responses per survey created. I’m sorry, I wasn’t aware of that.
Does any of you (maybe the partners more expert in the FOSS world) have advice on another free and open source tool for creating online surveys that could fit our purpose?
Thank you,
Francesco Agresta
European Project Manager
European Digital Learning Network
Via Domenico Scarlatti, 30
20124 Milano
Mob. +39 3496027623
Email francesco.agresta@dlearn.eu
www.dlearn.eu
From: Ifigeneia Metaxa metaxa@abe.gr Sent: Thursday, 20 June 2019 15:46 To: 'Sivan Pätsch' sivan@openforumeurope.org; 'Francesco Agresta' francesco.agresta@dlearn.eu; 'Jonas Gamalielsson' jonas.gamalielsson@his.se Cc: foss4smes-team@lists.fsfe.org Subject: RE: [FOSS4SMEs-team] FOSS4SMEs_ Impact surveys
Dear all,
If I get it right, these questions come from the SUS (e.g. https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.ht...) and, thus, should be used as they are if we want to derive a result on usability based on the scale of this methodology. The intention is for the same issue to be addressed/asked in both a “positive” and a “negative” way, in order to make sure that the user has a clear understanding and does not respond mechanically. Again, I am not that deep into the project; you know best.
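For context, the standard SUS computation Ifigeneia refers to can be sketched as follows (a minimal illustration of the published SUS method, not of anything project-specific): odd-numbered items are positively worded and contribute (rating - 1), even-numbered items are negatively worded and contribute (5 - rating), and the raw sum is scaled to 0-100.

```python
# Standard System Usability Scale (SUS) scoring: 10 items rated 1-5,
# alternating positively (odd) and negatively (even) worded statements.
def sus_score(responses):
    """responses: list of exactly 10 ratings (1-5), in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if i % 2 == 1:
            total += r - 1   # positive item: higher rating -> better
        else:
            total += 5 - r   # negative item: lower rating -> better
    return total * 2.5       # scale raw 0-40 to 0-100

print(sus_score([3] * 10))   # all-neutral answers give 50.0
```

This is also why cutting items from the matrix would break comparability with the SUS benchmark: the score is only defined over the full 10-item set.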
Best regards,
Ifigeneia
--
*Sivan Pätsch*
Digital Policy Adviser
OpenForum Europe
tel +32 (0) 2 486 4151
mob +32 (0) 484 90 71 23
web http://www.openforumeurope.org Follow us on Twitter @OpenForumEurope
https://twitter.com/OpenForumEurope
OFE Limited, a private company with liability limited by guarantee. Registered in England and Wales with number 05493935. Registered office: Claremont House, 1 Blunt Road, South Croydon, Surrey CR2 7PA, UK
--
Katerina Tsinari
EU Projects consultant
Antoni Tritsi 21, 570 01 Thessaloniki
T: 2310 233 266
Email: tsinari@abe.gr elsianli@abe.gr
URL: www.abe.gr
Skype: tsinarikaterina@hotmail.de
https://www.linkedin.com/company/atlantis-engineering-sa https://twitter.com/EngAtlantis https://www.facebook.com/Atlantis-Engineering-SA-141993602518655
Hello Max, yes, of course I remember we used that for the IO1 survey. It was your former colleague who set up the whole thing, so I just wasn't sure how to do that again, and I tried to find my own way into the product.
It would be great if you could create this account, so we can stick to the tool.
Please let me know.
Thank you so much, Francesco
-----Original Message----- From: Foss4smes-team foss4smes-team-bounces@lists.fsfe.org On behalf of Max Mehl Sent: Wednesday, 26 June 2019 16:56 To: foss4smes-team@lists.fsfe.org Subject: Re: [FOSS4SMEs-team] R: FOSS4SMEs_ Impact surveys
--
Max Mehl - Programme Manager - Free Software Foundation Europe
Contact and information: https://fsfe.org/about/mehl | @mxmehl
Become a supporter of software freedom: https://fsfe.org/join
Hello Francesco,
~ francesco.agresta@dlearn.eu [2019-06-26 17:58 +0200]:
It would be great if you could create this account, so we can stick to the tool.
Sure! I have just created an account for you, for which you should have received an email with further instructions.
I have also created 3 surveys which you are the owner of, so you can edit all settings as you like. If you need more surveys, please just let me know.
Best, Max
Hi Francesco,
Thanks for pointing to the update of the quality plan in the call on Tuesday.
I have reviewed chapter 7 on impact in line with our discussion on the call regarding the number of representatives from the different target groups we want input from for the impact measurement.
I see that the application (p. 62) points to five case studies per partner for the target group SMEs, but makes no prescription for any other category of impact measurement or target group. Is it therefore necessary to commit to a specific number for the other measurements and target groups? Would it be possible to make no binding commitment for the other categories, or at least to reduce the numbers significantly, as we have not promised five inputs for the other categories/target groups?
Best,
Sivan
On Fri, 2019-06-14 at 18:44 +0200, francesco.agresta@dlearn.eu wrote:
Hi Sivan,
you are right, we have no numerical targets for the other categories.
The idea to put a number on them came out of a discussion between Katerina and me: we thought it would be better for everyone to have a reference target, so that we could present the same amount of results across the different countries.
I tried to keep the numbers as low as possible to avoid overclaiming, given the time we have left. But, again, this is for us to decide, so let’s hear everyone’s opinion and make a collective decision.
Best,
Francesco Agresta
European Project Manager
European Digital Learning Network
Via Domenico Scarlatti, 30
20124 Milano
Mob. +39 3496027623
Email francesco.agresta@dlearn.eu
www.dlearn.eu
From: Sivan Pätsch sivan@openforumeurope.org Sent: Thursday, 20 June 2019 16:17 To: francesco.agresta@dlearn.eu; 'FOSS4SMEs mailing list' foss4smes-team@lists.fsfe.org Subject: Re: [FOSS4SMEs-team] Quality Plan v1.2
Hi Francesco,
Thanks for pointing to the update of the quality plan in the call on Tuesday.
I have reviewed chapter 7 on the impact according to our discussion on the call in regard to the number of representatives from the different target groups we want to receive input on the impact measurement.
I see that the application (p 62) points to five case studies per partner for the target group SMEs, but makes no prescription for any other category of impact measurement or target group. Is it therefore necessary to commit to a specific number for the other measurements and target groups? Would it be possible to make no binding commitment for the other categories or at least reduce the numbers significantly, as we have not promised five inputs for the other categories/target groups?
Best,
Sivan
Hi Francesco,
Thanks for explaining your thoughts. I understand your suggestion. Based on this, I would argue that we can treat it as a reference (something to aspire to), but not as a binding target. The application does not specify a number, so in my mind we should formally be fine even if we have one "input" across all partners. Maybe ATL can expand on whether, and what, expectations the national agency has set for these "no numbers defined" targets. I would really tend not to make commitments where we don't have to.
Best,
Sivan
--
Sivan Pätsch
Digital Policy Adviser (he/him)
OpenForum Europe
tel +32 (0) 2 486 4151
mob +32 (0) 484 90 71 23
web http://www.openforumeurope.org | Twitter: @OpenForumEurope
--
OpenForum Europe AISBL, Registered office: Avenue des Arts 56, Brussels 1000, Belgium
On Jun 20 2019, at 5:30 pm, francesco.agresta@dlearn.eu wrote:
Hi Sivan,
you are right, we have no numerical targets for the other categories.
The idea to put a number on them came out of a discussion between Katerina and me: we thought it would be better for everyone to have a reference target, so that we could present the same amount of results across the different countries.
I tried to keep the numbers as low as possible to avoid overclaiming, given the time we have left. But, again, this is for us to decide, so let’s hear everyone’s opinion and make a collective decision.
Best,
Francesco Agresta
European Project Manager
European Digital Learning Network
Via Domenico Scarlatti, 30
20124 Milano
Mob. +39 3496027623
Email francesco.agresta@dlearn.eu
www.dlearn.eu