Published
2025-07-10
Section
Research Articles
License
Copyright (c) 2025 Xiangqin Dai, Mohd Najwadi Yusoff, Bingli Zhu, Xiao Zhang, Wujin Jiang, Lei Wang

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
The journal adopts the Attribution-NonCommercial 4.0 International licence (CC BY-NC 4.0), which means that anyone may reuse and redistribute the materials for non-commercial purposes, provided the licence terms are followed and the original source is properly cited.
Author(s) shall retain the copyright of their work and grant the Journal/Publisher the right of first publication, with the work concurrently licensed under this licence since 2023, Vol. 8, No. 2.
Under this licence, author(s) allow third parties to download, reuse, reprint, modify, distribute, and/or copy the content, provided the authors are credited. No permission is required from the authors or the publisher.
This broad license intends to facilitate free access, as well as the unrestricted use of original works of all types. This ensures that the published work is freely and openly available in perpetuity.
By providing open access, the following benefits are brought about:
- Higher visibility, availability, and citations: free and unlimited accessibility of the publication over the internet, without any restrictions, increases citations of the article.
- Ease of search: publications are easily searchable in search engines and indexing databases.
- Rapid publication: accepted papers are immediately published online.
- Available for free download immediately after publication at https://esp.as-pub.com/index.php/ESP

Copyright Statement
1. The authors certify that the submitted manuscripts are original works, do not infringe the rights of others, are free from academic misconduct and confidentiality issues, and that there are no disputes over the authorship of collaborative articles. In the event of infringement, academic misconduct, confidentiality issues, or authorship disputes, the authors will bear all responsibility.
2. The authors agree to grant the Editorial Office of Environment and Social Psychology a licence to use the reproduction right, distribution right, information network dissemination right, performance right, translation right, and compilation right of the submitted manuscript, covering the work as a whole as well as the diagrams, tables, abstracts, and any other parts that can be extracted from the work and used in accordance with the characteristics of the journal. The Editorial Board of Environment and Social Psychology has the right to use and sub-licence the above-mentioned works for wide dissemination in print, electronic, and online versions, worldwide, for the full legal term of protection of the property rights in the copyright of the work.
3. The authors are entitled to the copyright of their works under the relevant laws of Singapore, provided that they do not exercise their rights in a manner prejudicial to the interests of the Journal.
About Licence
Environment and Social Psychology is an open access journal and all published work is available under a Creative Commons licence. Authors shall retain the copyright of their work and grant the journal/publisher the right of first publication, with the work licensed under the Attribution-NonCommercial 4.0 International licence (CC BY-NC 4.0).
Under this licence, the author grants permission to third parties to download, reuse, reprint, modify, distribute and/or copy the content with attribution to the author. No permission from the author or publisher is required.
This broad licence is intended to facilitate free access to and unrestricted use of original works of all kinds. This ensures that published works remain free and accessible in perpetuity. Submitted manuscripts, once accepted, are immediately available to the public and permanently accessible free of charge on the journal’s official website (https://esp.as-pub.com/index.php/ESP). Users may read, download, copy, print, search, or link to the full text of the article, or use it for other lawful purposes. However, any use of the work must retain the author's attribution, be limited to non-commercial purposes, and must not adapt the work.
Click to download <Agreement on the Licence for the Use of Copyright on Environment and Social Psychology>.
How to Cite
Generating culturally-contextual Chinese cyberbullying datasets: A GAN approach for social psychology research
Xiangqin Dai
1 School of Computer Sciences, Universiti Sains Malaysia, Pulau Pinang 11800, Malaysia; 2 Key Laboratory of Intelligent Information Processing and Control, Chongqing Three Gorges University, Chongqing 404000, China
Mohd Najwadi Yusoff
School of Computer Sciences, Universiti Sains Malaysia, Pulau Pinang 11800, Malaysia
Bingli Zhu
Key Laboratory of Intelligent Information Processing and Control, Chongqing Three Gorges University, Chongqing, 404000, China
Xiao Zhang
School of Computer Sciences, Universiti Sains Malaysia, Pulau Pinang 11800, Malaysia
Wujin Jiang
Key Laboratory of Intelligent Information Processing and Control, Chongqing Three Gorges University, Chongqing, 404000, China
Lei Wang
School of Computer Sciences, Universiti Sains Malaysia, Pulau Pinang 11800, Malaysia
DOI: https://doi.org/10.59429/esp.v10i7.3834
Keywords: generative adversarial networks; Chinese cyberbullying dataset; LeakGAN; cyberbullying
Abstract
Cyberbullying has become a growing concern with serious psychological and social consequences, including anxiety, depression, and disrupted online communities. Grounded in social psychology theories such as social learning and online disinhibition, cyberbullying is shaped by factors like anonymity and peer influence. However, the lack of Chinese-language cyberbullying datasets limits research and intervention efforts. To address this, we used four GAN models (SeqGAN, RankGAN, MaliGAN, and LeakGAN) to generate realistic Chinese cyberbullying text. LeakGAN outperformed the others, achieving a BLEU-2 score of 0.948, a self-BLEU-2 of 0.963, an NLL of 0.48, and the highest EmbSim values. Beyond technical performance, we emphasized psychological validity, cultural relevance, and ethical considerations in the data generation process. The findings have important implications for automated detection, intervention design, and social psychology research. Framed within ecological systems theory, this work also considers how online environments shape behavior. The synthetic dataset supports applications in schools, workplaces, and cross-cultural studies, though limitations remain in capturing the full complexity of real human behavior. Overall, LeakGAN’s success offers a strong foundation for future research on cyberbullying in digital contexts.
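The abstract evaluates generated text with BLEU-2 (similarity to real text) and self-BLEU-2 (similarity among generated samples, i.e. an inverse diversity measure). As a rough illustration of how these metrics are computed, the following is a simplified pure-Python sketch, not the evaluation code used in the paper; benchmark toolkits such as Texygen apply more elaborate tokenization and smoothing. BLEU-2 here is the geometric mean of clipped unigram and bigram precision times a brevity penalty, and self-BLEU scores each sentence against the rest of the corpus:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu2(candidate, references):
    """Sentence-level BLEU-2: geometric mean of clipped 1-gram and
    2-gram precision, multiplied by a brevity penalty."""
    precisions = []
    for n in (1, 2):
        cand = Counter(ngrams(candidate, n))
        if not cand:
            return 0.0
        # Clip each n-gram count by its maximum count in any reference.
        max_ref = Counter()
        for ref in references:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand.items())
        precisions.append(clipped / sum(cand.values()))
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty against the closest reference length.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c >= r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

def self_bleu2(corpus):
    """Self-BLEU-2: average BLEU-2 of each sentence against all the
    others; higher values indicate less diverse generated text."""
    return sum(bleu2(s, corpus[:i] + corpus[i + 1:])
               for i, s in enumerate(corpus)) / len(corpus)
```

A high BLEU-2 together with a high self-BLEU-2, as reported for LeakGAN, indicates output that closely matches real text but at some cost in sample diversity.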
References
[1] Smith, P. K., Mahdavi, J., Carvalho, M., Fisher, S., Russell, S., & Tippett, N. (2008). Cyberbullying: its nature and impact in secondary school pupils. J. Child Psychol. Psychiatry, 49, 376–385. doi:10.1111/j.1469-7610.2007.01846.x
[2] Kaimba, F. (2023). School intervention in peer cyberbullying: the case of high school students in Kafue District, Zambia [D]. Supervisor: Ming Huo. Northeast Normal University.
[3] Ding, H. (2023). Research on the cyberbullying problem of minors and its governance [D]. Supervisor: Xu Lei. Nanjing University of Posts and Telecommunications.
[4] Fu, B. (2023). An international comparative study of cyberbullying prevention programmes in primary schools [D]. Supervisors: Qian Songling; Men Xinwei. Jilin University of Foreign Languages.
[5] 65.3% of surveyed youth said they or people around them have experienced cyber violence [EB/OL]. (2023-06-20). https://baijiahao.baidu.com/s?id=1769178755106899787&wfr=spider&for=pc
[6] Rao, M. E., & Rao, D. M. (2021). The mental health of high school students during the COVID-19 pandemic. Front. Educ., 6, 719539. doi:10.3389/feduc.2021.719539
[7] Englander, E. (2021). Social and mental health during the COVID-19 pandemic. J. Am. Acad. Child Adolesc. Psychiatry, 60, S147. doi:10.1016/j.jaac.2021.09.039
[8] Paat, Y. F., & Markham, C. (2020). Digital crime, trauma, and abuse: Internet safety and cyber risks for adolescents and emerging adults in the 21st century. Soc. Work Ment. Health, 19, 18–40. doi:10.1080/15332985.2020.1845281
[9] Kim, Y. J., Qian, L., & Aslam, M. S. (2020). Development of a personalized mobile mental health intervention for workplace cyberbullying among health practitioners: protocol for a mixed methods study. JMIR Res. Protoc., 9, e23112. doi:10.2196/23112
[10] Nochaiwong, S., Ruengorn, C., Thavorn, K., Hutton, B., Awiphan, R., Phosuya, C., et al. (2021). Global prevalence of mental health issues among the general population during the coronavirus disease-2019 pandemic: a systematic review and meta-analysis. Sci. Rep., 11, 1. doi:10.1038/s41598-021-89700-8
[11] Kowalski, R. M., Toth, A., & Morgan, M. (2017). Bullying and cyberbullying in adulthood and the workplace. J. Soc. Psychol., 158, 64–81. doi:10.1080/00224545.2017.1302402
[12] Kowalski, R. M., Limber, S. P., & McCord, A. (2018). A developmental approach to cyberbullying: prevalence and protective factors. Aggress. Violent Behav., 45, 20–32. doi:10.1016/j.avb.2018.02.009
[13] Goodfellow, I. J., et al. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems.
[14] Andreini, P., Ciano, G., Bonechi, S., Graziani, C., Lachi, V., Mecocci, A., ... & Bianchini, M. (2021). A two-stage GAN for high-resolution retinal image generation and segmentation. Electronics, 11(1), 60.
[15] Huang, G., & Jafari, A. H. (2023). Enhanced balancing GAN: minority-class image generation. Neural Computing and Applications, 35(7), 5145–5154.
[16] Tran, N. T., Tran, V. H., Nguyen, N. B., Nguyen, T. K., & Cheung, N. M. (2021). On data augmentation for GAN training. IEEE Transactions on Image Processing, 30, 1882–1897.
[17] Liu, Y. (2021). Improved generative adversarial network and its application in image oil painting style transfer. Image and Vision Computing, 105, 104087.
[18] Chen, Y., Zhang, H., Liu, L., Chen, X., Zhang, Q., Yang, K., ... & Xie, J. (2021). Research on image inpainting algorithm of improved GAN based on two-discriminations networks. Applied Intelligence, 51, 3460–3474.
[19] Zhang, B., Gu, S., Zhang, B., Bao, J., Chen, D., Wen, F., ... & Guo, B. (2022). StyleSwin: transformer-based GAN for high-resolution image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11304–11314).
[20] Singh, N. K., & Raza, K. (2021). Medical image generation using generative adversarial networks: a review. Health Informatics: A Computational Perspective in Healthcare, 77–96.
[21] Hossam, M., Le, T., Papasimeon, M., Huynh, V., & Phung, D. (2021). Text generation with deep variational GAN. arXiv preprint arXiv:2104.13488.
[22] Chen, Z., Zhu, T., Xiong, P., Wang, C., & Ren, W. (2021). Privacy preservation for image data: a GAN-based method. International Journal of Intelligent Systems, 36(4), 1668–1685.
[23] Yu, L., Zhang, W., Wang, J., & Yu, Y. (2017). SeqGAN: sequence generative adversarial nets with policy gradient. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 31, No. 1).
[24] Guo, J., Lu, S., Cai, H., Zhang, W., Yu, Y., & Wang, J. (2018). Long text generation via adversarial training with leaked information. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 32, No. 1).
[25] Ding, X., Wang, Y., Xu, Z., Welch, W. J., & Wang, Z. J. (2021). CcGAN: continuous conditional generative adversarial networks for image generation. In International Conference on Learning Representations.
[26] Diao, S., Shen, X., Shum, K., Song, Y., & Zhang, T. (2021). TILGAN: transformer-based implicit latent GAN for diverse and coherent text generation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 4844–4858).
[27] Yang, Y., Dan, X., Qiu, X., & Gao, Z. (2020). FGGAN: feature-guiding generative adversarial networks for text generation. IEEE Access, 8, 105217–105225.
[28] Deng, Y., Gao, K., Liao, N., & Chen, Y. (2024). Automatic script generation and optimisation technique based on generative adversarial network [J]. Digital Technology and Application, 42(02), 232–234.
[29] Li, B., Yang, P., Sun, Y., Hu, Z., & Yi, M. (2024). Advances and challenges in artificial intelligence text generation [J]. Frontiers of Information Technology & Electronic Engineering, 25(01), 64–84.
[30] Xiong, L., Pei, Z., Jiang, M., & Bao, Q. (2023). A text generation model based on improved generative adversarial network. Journal of Inner Mongolia University for Nationalities (Natural Science Edition), (02), 118–123. doi:10.14045/j.cnki.15-1220.2023.02.005
[31] Peng, P. F., & Zhou, L. R. (2022). Adding reward to GRU adversarial network text generation model. Computers and Modernisation, (07), 121–126. doi:CNKI:SUN:JYXH.0.2022-07-019
[32] Zhao, T., Song, Y., Li, G., Wang, L., Chen, Y., & Ren, D. (2022). A review of text generation research based on deep reinforcement learning [J]. Journal of Tianjin University of Science and Technology, 37(02), 71–80. doi:10.13364/j.issn.1672-6510.20210146
[33] Xue, Q., Meng, X. F., Zhang, F., Zhang, X. Y., Zhu, J. M., Zhu, Y., & Wang, D. D. (2022). HLMGAN: hierarchical learning for multi-reward text generation adversarial networks. Journal of Yunnan University (Natural Science Edition), (01), 64–72.
[34] Kang, Y. Y., Peng, D. L., Chen, Z., & Liu, C. C. (2019). ED-GAN: a legal text generation model based on improved generative adversarial networks. Journal of Chinese Computer Systems, (05), 1020–1025. doi:CNKI:SUN:XXWX.0.2019-05-021
[35] Zhou, W., Ge, T., Xu, K., Wei, F., & Zhou, M. (2020). Self-adversarial learning with comparative discrimination for text generation. arXiv preprint arXiv:2001.11691.
[36] Mikolov, T., Karafiát, M., Burget, L., et al. (2010). Recurrent neural network based language model. In Interspeech, Conference of the International Speech Communication Association, Makuhari, Chiba, Japan.
[37] Yu, L., Zhang, W., Wang, J., et al. (2017). SeqGAN: sequence generative adversarial nets with policy gradient. In Proceedings of the AAAI Conference on Artificial Intelligence, 31(1).
[38] Che, T., Li, Y., Zhang, R., et al. (2017). Maximum-likelihood augmented discrete generative adversarial networks. arXiv preprint arXiv:1702.07983.
[39] Lin, K., Li, D., He, X., et al. (2017). Adversarial ranking for language generation. Advances in Neural Information Processing Systems, 30.
[40] Guo, J., Lu, S., Cai, H., et al. (2018). Long text generation via adversarial training with leaked information. In Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
[41] Zhu, Y., Lu, S., Zheng, L., et al. (2018). Texygen: a benchmarking platform for text generation models. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (pp. 1097–1100).
[42] Papineni, K., Roukos, S., Ward, T., et al. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (pp. 311–318).






