code check

Trilarion 2020-08-11 14:01:21 +02:00
parent 1ca7c6c12d
commit 30a252a43f
16 changed files with 285 additions and 198 deletions

View File

@ -6,13 +6,6 @@ http://circularstudios.com/
http://cyxdown.free.fr/bs/
http://cyxdown.free.fr/f2b/
http://dead-code.org/home/
https://github.com/restorer/gloomy-dungeons-2
https://github.com/WohlSoft/PGE-Project
https://en.wikipedia.org/wiki/List_of_free_and_open-source_Android_applications#Games
https://notabug.org/Calinou/awesome-gamedev#games
https://forum.freegamedev.net/viewtopic.php?f=20&t=11627
https://www.old-games.ru/forum/threads/nekommercheskie-analogi-izvestnyx-igr.40868/page-9
https://github.com/MyreMylar/pygame_gui
http://e-adventure.e-ucm.es/login/index.php (games of eAdventure)
http://ethernet.wasted.ch/
http://evolonline.org/about
@ -173,6 +166,7 @@ https://en.wikipedia.org/w/index.php?title=Trigger_Rally&action=edit&redlink=1
https://en.wikipedia.org/wiki/Crystal_Space
https://en.wikipedia.org/wiki/GNOME_Games_Collection
https://en.wikipedia.org/wiki/List_of_commercial_video_games_with_available_source_code
https://en.wikipedia.org/wiki/List_of_free_and_open-source_Android_applications#Games
https://en.wikipedia.org/wiki/M.U.G.E.N
https://en.wikipedia.org/wiki/MUD#Spread (all there)
https://en.wikipedia.org/wiki/MUD_client (all there)
@ -186,6 +180,7 @@ https://enigma-dev.org/about.htm
https://faq.tuxfamily.org/Games/En
https://fedoraproject.org/wiki/SIGs/Games#List_of_games_we_will_NOT_package
https://flathub.org/home (use it for Linux packaging) / https://flathub.org/apps/category/Game
https://forum.freegamedev.net/viewtopic.php?f=20&t=11627
https://forums.scummvm.org/viewtopic.php?t=13512&highlight=open+source
https://freegamer.blogspot.com (maybe there is something interesting)
https://futurepinball.com/
@ -199,6 +194,8 @@ https://github.com/00-Evan/shattered-pixel-dungeon-gdx
https://github.com/acedogblast/Project-Uranium-Godot
https://github.com/AdaDoom3/AdaDoom3
https://github.com/AdamsLair/duality
https://github.com/adriengivry/Overload
https://github.com/aloisdeniel/awesome-monogame
https://github.com/Alzter/TuxBuilder
https://github.com/amerkoleci/Vortice.Windows
https://github.com/arturkot/the-house-game
@ -216,6 +213,7 @@ https://github.com/CatacombGames/
https://github.com/cflewis/Infinite-Mario-Bros
https://github.com/Chluverman/android-gltron
https://github.com/codenamecpp/carnage3d
https://github.com/coelckers/Raze
https://github.com/collections/game-engines (only OS)
https://github.com/collections/javascript-game-engines (only OS)
https://github.com/collections/pixel-art-tools (tools)
@ -229,9 +227,11 @@ https://github.com/Cortrah/SpaceOperaDesign, https://github.com/Cortrah/SpaceOpe
https://github.com/cping/LGame
https://github.com/cymonsgames/CymonsGames (collection)
https://github.com/DaanVanYperen/artemis-odb-contrib
https://github.com/danirod/jumpdontdie
https://github.com/David20321/7DFPS (http://www.wolfire.com/receiver, not open source, for rejected list)
https://github.com/DeflatedPickle/FAOSDance
https://github.com/delaford/game
https://github.com/DethRaid/SanityEngine
https://github.com/Donerkebap13/DonerComponents
https://github.com/Drasky-Vanderhoff/CommonDrops
https://github.com/EaW-Team/equestria_dev
@ -297,8 +297,10 @@ https://github.com/morganbengtsson/mos
https://github.com/MrFrenik/Enjon
https://github.com/MultiCraft/MultiCraft
https://github.com/MustaphaTR/Romanovs-Vengeance
https://github.com/MyreMylar/pygame_gui
https://github.com/ogarcia/opensudoku
https://github.com/OGRECave/scape
https://github.com/OpenHV/OpenHV
https://github.com/OpenMandrivaAssociation
https://github.com/OpenMandrivaAssociation/nexuiz/blob/master/nexuiz.spec
https://github.com/OpenRA/d2
@ -310,6 +312,7 @@ https://github.com/OSSGames (all there, but we should have them already)
https://github.com/Patapom/GodComplex
https://github.com/PavelDoGreat/WebGL-Fluid-Simulation
https://github.com/perbone/luascript
https://github.com/Phyronnaz/VoxelPlugin
https://github.com/pixijs/pixi.js
https://github.com/pld-linux
https://github.com/pld-linux/nexuiz/blob/master/nexuiz.spec
@ -323,6 +326,7 @@ https://github.com/rakugoteam/Rakugo
https://github.com/rds1983/Myra
https://github.com/redomar/JavaGame
https://github.com/Renanse/Ardor3D
https://github.com/restorer/gloomy-dungeons-2
https://github.com/RetroAchievements/RALibretro
https://github.com/RetroAchievements/RAWeb
https://github.com/rizwan3d/MotoGameEngine
@ -340,6 +344,7 @@ https://github.com/search?p=1&q=sunrider&type=Repositories, sunrider
https://github.com/senior-sigan/WHY_CPP
https://github.com/septag/glslcc
https://github.com/septag/rizz
https://github.com/sinshu/managed-doom
https://github.com/skypjack/entt
https://github.com/smlinux/nexuiz
https://github.com/SPC-Some-Polish-Coders/PopHead
@ -395,6 +400,7 @@ https://libregamewiki.org/index.php?title=Libregamewiki_talk:Community_Portal&ol
https://libregamewiki.org/Libregamewiki:Suggested_games#Likely_sources_for_more_free_games
https://lmemsm.dreamwidth.org/8013.html (List of some of my favorite Open Source games)
https://love2d.org/forums/viewforum.php?f=14 (check them if time)
https://notabug.org/Calinou/awesome-gamedev#games
https://odr.chalmers.se/handle/20.500.12380/219006
https://osdn.net/softwaremap/trove_list.php?form_cat=80
https://packages.debian.org/sid/games/etw
@ -486,6 +492,7 @@ https://www.moddb.com/engines/sage-strategy-action-game-engine
https://www.moddb.com/mods/ (search for all)
https://www.musztardasarepska.pl/wgdown/
https://www.ness-engine.com/
https://www.old-games.ru/forum/threads/nekommercheskie-analogi-izvestnyx-igr.40868/page-9
https://www.openhub.net/ (search for games)
https://www.phpbb.com/
https://www.piston.rs/

View File

@ -1,6 +1,6 @@
"""
takes all gits that we have in the list and checks the master branch out, then collects some statistics:
- number of distinct comitters
- number of distinct committers
- list of commit dates
- number of commits
- language detection and lines of code counting on final state
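
A minimal sketch of the committer counting described above, assuming a local clone at a placeholder path and git available on the PATH (not the script's exact code, just the idea):

import subprocess

def commit_statistics(repo_path):
    # one line per commit: author name, author time, committer name, committer time
    log = subprocess.run(['git', 'log', '--format=%an, %at, %cn, %ct'],
                         cwd=repo_path, capture_output=True, text=True, check=True).stdout
    lines = [line for line in log.split('\n') if line]  # drop the trailing empty line
    committers = set(line.split(', ')[0] for line in lines)
    return len(lines), len(committers)

# e.g. number_commits, number_committers = commit_statistics('temp/some-clone')
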
@ -14,7 +14,7 @@ from utils.utils import *
if __name__ == "__main__":
# paths
file_path = os.path.realpath(os.path.dirname(__file__))
file_path = os.path.realpath(os.path.dirname(__file__))
archives_path = os.path.join(file_path, 'git_repositories.json')
temp_path = os.path.join(file_path, 'temp')
@ -40,11 +40,10 @@ if __name__ == "__main__":
info = subprocess_run(["git", "log", '--format="%an, %at, %cn, %ct"'])
info = info.split('\n')
info = info[:-1] # last line is empty
info = info[:-1] # last line is empty
number_commits = len(info)
info = [x.split(', ') for x in info]
commiters = set([x[0] for x in info])
print(' commits: {}, commiters {}'.format(number_commits, len(commiters)))
committers = set([x[0] for x in info])
print(' commits: {}, committers {}'.format(number_commits, len(committers)))

View File

@ -8,9 +8,11 @@ import re
from difflib import SequenceMatcher
from utils.utils import *
def similarity(a, b):
return SequenceMatcher(None, a, b).ratio()
if __name__ == "__main__":
similarity_threshold = 0.7
@ -44,4 +46,3 @@ if __name__ == "__main__":
print('{} maybe included in {}'.format(test_name, ', '.join(matches)))
else:
print('{} not included'.format(test_name))
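
The inclusion check above rests on difflib's SequenceMatcher ratio with a 0.7 threshold; a small standalone sketch (the candidate name and the list of known names are made up for illustration):

from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

similarity_threshold = 0.7
known_names = ['Tanks of Freedom', 'Freeciv-web', 'Scorched3D']
test_name = 'tanks of freedom'  # hypothetical candidate
matches = [name for name in known_names
           if similarity(test_name.lower(), name.lower()) > similarity_threshold]
if matches:
    print('{} maybe included in {}'.format(test_name, ', '.join(matches)))
else:
    print('{} not included'.format(test_name))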

View File

@ -34,8 +34,8 @@ def download_lgw_content():
while True:
text = requests.get(url).text
soup = BeautifulSoup(text, 'html.parser')
#categories = soup.find('div', id='mw-subcategories').find_all('li')
#categories = [(x.a['href'], x.a.string) for x in categories]
# categories = soup.find('div', id='mw-subcategories').find_all('li')
# categories = [(x.a['href'], x.a.string) for x in categories]
# game pages
pages = soup.find('div', id='mw-pages').find_all('li')
@ -89,7 +89,7 @@ def parse_lgw_content():
entry['external links'] = links
# get meta description
description = soup.find('meta', attrs={"name":"description"})
description = soup.find('meta', attrs={"name": "description"})
entry['description'] = description['content']
# parse gameinfobox
@ -138,7 +138,7 @@ def parse_lgw_content():
if 'Games' not in categories:
print(' "Games" not in categories')
else:
categories.remove('Games') # should be there
categories.remove('Games') # should be there
# strip games at the end
phrase = ' games'
categories = [x[:-len(phrase)] if x.endswith(phrase) else x for x in categories]
@ -148,7 +148,6 @@ def parse_lgw_content():
entries.append(entry)
# save entries
text = json.dumps(entries, indent=1)
utils.write_text(entries_file, text)
@ -185,6 +184,7 @@ def ignore_content(entries, fields, ignored):
entries[index] = entry
return entries
def remove_prefix_suffix(entries, fields, prefixes, suffixes):
if not isinstance(fields, tuple):
fields = (fields, )
@ -224,7 +224,7 @@ def remove_parenthized_content(entries, fields):
content = entry[field]
if not isinstance(content, list):
content = [content]
content = [re.sub(r'\([^)]*\)', '', c) for c in content] # remove parentheses content
content = [re.sub(r'\([^)]*\)', '', c) for c in content] # remove parentheses content
content = [x.strip() for x in content]
content = list(set(content))
entry[field] = content
@ -312,10 +312,10 @@ def clean_lgw_content():
entries = remove_parenthized_content(entries, ('assets license', 'code language', 'code license', 'engine', 'genre', 'last active', 'library'))
entries = remove_prefix_suffix(entries, ('code license', 'assets license'), ('"', 'GNU', ), ('"', '[3]', '[2]', '[1]', 'only'))
entries = replace_content(entries, ('code license', 'assets license'), 'GPL', ('General Public License', ))
entries = replace_content(entries, ('code license', 'assets license'), 'GPL-2.0', ('GPLv2', )) # for LGW GPLv2 would be the correct writing
entries = replace_content(entries, ('code license', 'assets license'), 'GPL-2.0', ('GPLv2', )) # for LGW GPLv2 would be the correct writing
entries = replace_content(entries, ('code license', 'assets license'), 'GPL-2', ('GPLv2', 'GPL v2', 'GPL version 2.0', 'GPL 2.0', 'General Public License v2', 'GPL version 2', 'Gplv2', 'GPL 2'))
entries = replace_content(entries, ('code license', 'assets license'), 'GPL-2', ('GPL v2 or later', 'GPL 2+', 'GPL v2+', 'GPL version 2 or later'))
entries = replace_content(entries, ('code license', 'assets license'), 'GPL-3.0', ('GPLv3', )) # for LGW GPLv3 would be the correct writing
entries = replace_content(entries, ('code license', 'assets license'), 'GPL-3.0', ('GPLv3', )) # for LGW GPLv3 would be the correct writing
entries = replace_content(entries, ('code license', 'assets license'), 'GPL-3', ('GPL v3', 'GNU GPL v3', 'GPL 3'))
entries = replace_content(entries, ('code license', 'assets license'), 'GPL-3', ('GPL v3+', 'GPL v.3 or later', 'GPL v3 or later'))
entries = replace_content(entries, ('code license', 'assets license'), 'Public domain', ('public domain', 'Public Domain'))
@ -343,7 +343,6 @@ def clean_lgw_content():
entries = ignore_content(entries, 'last active', ('2019', ))
entries = ignore_content(entries, 'platform', ('DOS', ))
# list for every unique field
print('\nfield contents after')
fields = sorted(list(unique_fields - set(('description', 'external links', 'dev home', 'forum', 'home', 'linux-packages', 'developer', 'chat', 'tracker', 'Latest release', 'name', 'repo', 'Release date', 'categories'))))
@ -373,4 +372,4 @@ if __name__ == "__main__":
# parse_lgw_content()
# stage three
clean_lgw_content()
clean_lgw_content()

View File

@ -26,18 +26,34 @@ import json
import os
from utils import constants, utils, osg
lgw_name_aliases = {'Eat the Whistle': 'Eat The Whistle', 'Scorched 3D': 'Scorched3D', 'Blob Wars Episode 1 : Metal Blob Solid': 'Blobwars: Metal Blob Solid', 'Adventure': 'Colossal Cave Adventure',
'Liquid War 6': 'Liquid War', 'Gusanos': 'GUSANOS', 'Corewars': 'Core War', 'FLARE': 'Flare', 'Vitetris': 'vitetris', 'Powder Toy': 'The Powder Toy', 'Asylum': 'SDL Asylum',
'Atanks': 'Atomic Tanks', 'HeXon': 'heXon', 'Unnethack': 'UnNetHack', 'Nova Pinball': 'NOVA PINBALL', 'Jump n Bump': "Jump'n'Bump", 'Blades of Exile': 'Classic Blades of Exile',
'Colobot': 'Colobot: Gold Edition', 'Dead Justice': 'Cat Mother Dead Justice', 'FreeDink': 'GNU FreeDink', 'FRaBs': 'fRaBs', 'Harmonist': 'Harmonist: Dayoriah Clan Infiltration', 'Iris2 3D Client - for Ultima Online': 'Iris2',
'Java Classic Role Playing Game': 'jClassicRPG', 'Osgg': 'OldSkool Gravity Game', 'PyRacerz': 'pyRacerz', 'Starfighter': 'Project: Starfighter',
'TORCS': 'TORCS, The Open Racing Car Simulator', 'Vertigo (game)': 'Vertigo', 'XInvaders3D': 'XInvaders 3D', 'LambdaRogue': 'LambdaRogue: The Book of Stars', 'Maniadrive': 'ManiaDrive',
lgw_name_aliases = {'Eat the Whistle': 'Eat The Whistle', 'Scorched 3D': 'Scorched3D',
'Blob Wars Episode 1 : Metal Blob Solid': 'Blobwars: Metal Blob Solid',
'Adventure': 'Colossal Cave Adventure',
'Liquid War 6': 'Liquid War', 'Gusanos': 'GUSANOS', 'Corewars': 'Core War', 'FLARE': 'Flare',
'Vitetris': 'vitetris', 'Powder Toy': 'The Powder Toy', 'Asylum': 'SDL Asylum',
'Atanks': 'Atomic Tanks', 'HeXon': 'heXon', 'Unnethack': 'UnNetHack',
'Nova Pinball': 'NOVA PINBALL', 'Jump n Bump': "Jump'n'Bump",
'Blades of Exile': 'Classic Blades of Exile',
'Colobot': 'Colobot: Gold Edition', 'Dead Justice': 'Cat Mother Dead Justice',
'FreeDink': 'GNU FreeDink', 'FRaBs': 'fRaBs', 'Harmonist': 'Harmonist: Dayoriah Clan Infiltration',
'Iris2 3D Client - for Ultima Online': 'Iris2',
'Java Classic Role Playing Game': 'jClassicRPG', 'Osgg': 'OldSkool Gravity Game',
'PyRacerz': 'pyRacerz', 'Starfighter': 'Project: Starfighter',
'TORCS': 'TORCS, The Open Racing Car Simulator', 'Vertigo (game)': 'Vertigo',
'XInvaders3D': 'XInvaders 3D', 'LambdaRogue': 'LambdaRogue: The Book of Stars',
'Maniadrive': 'ManiaDrive',
'Which Way Is Up': 'Which Way Is Up?'}
lgw_ignored_entries = ['Hetris', '8 Kingdoms', 'Antigravitaattori', 'Arena of Honour', 'Arkhart', 'Ascent of Justice', 'Balazar III', 'Balder3D', 'Barbie Seahorse Adventures', 'Barrage', 'Gnome Batalla Naval', 'Blocks',
'Brickshooter', 'Bweakfwu', 'Cheese Boys', 'Clippers', 'Codewars', 'CRAFT: The Vicious Vikings', 'DQM', 'EmMines', 'Eskimo-run', 'Feuerkraft', 'Fight or Perish', 'Flatland', 'Forest patrol', 'Free Reign', 'GalaxyMage',
'Gloss', 'GRUB Invaders', 'Howitzer Skirmish', 'Imperium: Sticks', 'Interstate Outlaws', 'GNOME Games', 'KDE Games', 'LegacyClone', 'Memonix', 'Ninjapix', 'Neverputt', 'Militia Defense', 'Sudoku86',
'Terminal Overload release history', 'Scions of Darkness', 'Sedtris', 'SilChess', 'SSTPong', 'Tesseract Trainer', 'TunnelWars', 'The Fortress']
lgw_ignored_entries = ['Hetris', '8 Kingdoms', 'Antigravitaattori', 'Arena of Honour', 'Arkhart', 'Ascent of Justice',
'Balazar III', 'Balder3D', 'Barbie Seahorse Adventures', 'Barrage', 'Gnome Batalla Naval',
'Blocks',
'Brickshooter', 'Bweakfwu', 'Cheese Boys', 'Clippers', 'Codewars', 'CRAFT: The Vicious Vikings',
'DQM', 'EmMines', 'Eskimo-run', 'Feuerkraft', 'Fight or Perish', 'Flatland', 'Forest patrol',
'Free Reign', 'GalaxyMage',
'Gloss', 'GRUB Invaders', 'Howitzer Skirmish', 'Imperium: Sticks', 'Interstate Outlaws',
'GNOME Games', 'KDE Games', 'LegacyClone', 'Memonix', 'Ninjapix', 'Neverputt', 'Militia Defense',
'Sudoku86',
'Terminal Overload release history', 'Scions of Darkness', 'Sedtris', 'SilChess', 'SSTPong',
'Tesseract Trainer', 'TunnelWars', 'The Fortress']
licenses_map = {'GPLv2': 'GPL-2.0', 'GPLv2+': 'GPL-2.0', 'GPLv3': 'GPL-3.0', 'GPLv3+': 'GPL-3.0'}
@ -45,6 +61,7 @@ licenses_map = {'GPLv2': 'GPL-2.0', 'GPLv2+': 'GPL-2.0', 'GPLv3': 'GPL-3.0', 'GP
def compare_sets(a, b, name, limit=None):
"""
:param limit:
:param a:
:param b:
:param name:
@ -79,15 +96,15 @@ if __name__ == "__main__":
lgw_entries = json.loads(text)
# eliminate the ignored entries
_ = [x['name'] for x in lgw_entries if x['name'] in lgw_ignored_entries] # those that will be ignored
_ = set(lgw_ignored_entries) - set(_) # those that shall be ignored minus those that will be ignored
_ = [x['name'] for x in lgw_entries if x['name'] in lgw_ignored_entries] # those that will be ignored
_ = set(lgw_ignored_entries) - set(_) # those that shall be ignored minus those that will be ignored
if _:
print('Can un-ignore {}'.format(_))
lgw_entries = [x for x in lgw_entries if x['name'] not in lgw_ignored_entries]
# perform name and code language replacements
_ = [x['name'] for x in lgw_entries if x['name'] in lgw_name_aliases.keys()] # those that will be renamed
_ = set(lgw_name_aliases.keys()) - set(_) # those that shall be renamed minus those that will be renamed
_ = [x['name'] for x in lgw_entries if x['name'] in lgw_name_aliases.keys()] # those that will be renamed
_ = set(lgw_name_aliases.keys()) - set(_) # those that shall be renamed minus those that will be renamed
if _:
print('Can un-rename {}'.format(_))
for index, lgw_entry in enumerate(lgw_entries):
@ -121,8 +138,8 @@ if __name__ == "__main__":
mandatory_fields = unique_fields.copy()
for lgw_entry in lgw_entries:
remove_fields = [field for field in mandatory_fields if field not in lgw_entry]
mandatory_fields -= set(remove_fields)
print('mandatory lgw fields: {}'.format(sorted(list(mandatory_fields ))))
mandatory_fields -= set(remove_fields)
print('mandatory lgw fields: {}'.format(sorted(list(mandatory_fields))))
# read our database
our_entries = osg.assemble_infos()
@ -148,7 +165,7 @@ if __name__ == "__main__":
print('\n')
for lgw_entry in lgw_entries:
lgw_name = lgw_entry['name']
is_included = False
for our_entry in our_entries:
our_name = our_entry['name']
@ -166,7 +183,7 @@ if __name__ == "__main__":
p += compare_sets(lgw_entry.get(key, []), our_entry.get(key, []), key)
# categories/keywords
#p += compare_sets(lgw_entry.get('categories', []), our_entry.get('keywords', []), 'categories/keywords')
# p += compare_sets(lgw_entry.get('categories', []), our_entry.get('keywords', []), 'categories/keywords')
# code language
key = 'code language'
@ -177,9 +194,12 @@ if __name__ == "__main__":
p += compare_sets(lgw_entry.get(key, []), our_entry.get(key, []), key)
# engine, library
p += compare_sets(lgw_entry.get('engine', []), our_entry.get('code dependencies', []), 'code dependencies', 'notthem')
p += compare_sets(lgw_entry.get('library', []), our_entry.get('code dependencies', []), 'code dependencies', 'notthem')
p += compare_sets(lgw_entry.get('engine', [])+lgw_entry.get('library', []), our_entry.get('code dependencies', []), 'engine/library', 'notus')
p += compare_sets(lgw_entry.get('engine', []), our_entry.get('code dependencies', []),
'code dependencies', 'notthem')
p += compare_sets(lgw_entry.get('library', []), our_entry.get('code dependencies', []),
'code dependencies', 'notthem')
p += compare_sets(lgw_entry.get('engine', []) + lgw_entry.get('library', []),
our_entry.get('code dependencies', []), 'engine/library', 'notus')
# assets license
key = 'assets license'
@ -204,7 +224,7 @@ if __name__ == "__main__":
print('warning: file {} already existing, save under slightly different name'.format(file_name))
target_file = os.path.join(constants.entries_path, file_name[:-3] + '-duplicate.md')
if os.path.isfile(target_file):
continue # just for safety reasons
continue # just for safety reasons
# add name
entry = '# {}\n\n'.format(lgw_name)
@ -276,4 +296,4 @@ if __name__ == "__main__":
# finally write to file
utils.write_text(target_file, entry)
newly_created_entries += 1
newly_created_entries += 1

View File

@ -17,11 +17,15 @@ def local_module(module_base, file_path, module):
pathB = os.path.join(file_path, *module)
return os.path.exists(pathA) or os.path.exists(pathB)
if __name__ == "__main__":
system_libraries = {'__builtin__', '.', '..', '*', 'argparse', 'array', 'os', 'copy', 'codecs', 'collections', 'ctypes', 'pickle', 'cPickle', 'datetime', 'decimal', 'email', 'functools',
'io', 'itertools', 'json', 'httplib', 'glob', 'math', 'cmath', 'heapq', 'md5', 'operator', 'random', 're', 'sha', 'shutil', 'smtplib', 'socket', 'string', 'struct', 'subprocess',
'sys', 'thread', 'threading', 'time', 'traceback', 'types', 'urllib', 'urllib2', 'urlparse', 'unittest', 'yaml', 'yaml3', 'zlib', 'zipfile', '__future__'}
system_libraries = {'__builtin__', '.', '..', '*', 'argparse', 'array', 'os', 'copy', 'codecs', 'collections',
'ctypes', 'pickle', 'cPickle', 'datetime', 'decimal', 'email', 'functools',
'io', 'itertools', 'json', 'httplib', 'glob', 'math', 'cmath', 'heapq', 'md5', 'operator',
'random', 're', 'sha', 'shutil', 'smtplib', 'socket', 'string', 'struct', 'subprocess',
'sys', 'thread', 'threading', 'time', 'traceback', 'types', 'urllib', 'urllib2', 'urlparse',
'unittest', 'yaml', 'yaml3', 'zlib', 'zipfile', '__future__'}
regex_import = re.compile(r"^\s*import (.*)", re.MULTILINE)
regex_from = re.compile(r"^\s*from (.*) import (.*)", re.MULTILINE)
regex_comment = re.compile(r"(#.*)$", re.MULTILINE)
@ -66,7 +70,7 @@ if __name__ == "__main__":
matches = regex_import.findall(content)
for match in matches:
modules = match.split(',') # split if more
modules = match.split(',') # split if more
for module in modules:
module = module.strip()
if not local_module(module_base, file_path, module):
@ -76,7 +80,7 @@ if __name__ == "__main__":
matches = regex_from.findall(content)
for match in matches:
module = match[0] # only the from part
module = match[0] # only the from part
module = module.strip()
if not local_module(module_base, file_path, module):
imports.append(module)
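
For reference, a tiny example of how those import regexes behave on an invented two-line source snippet (output shown as a comment, illustrative only):

import re

regex_import = re.compile(r"^\s*import (.*)", re.MULTILINE)
regex_from = re.compile(r"^\s*from (.*) import (.*)", re.MULTILINE)

source = "import os, json\nfrom utils import constants\n"
imports = []
for match in regex_import.findall(source):
    imports.extend(module.strip() for module in match.split(','))  # split if more
for match in regex_from.findall(source):
    imports.append(match[0].strip())  # only the from part
print(imports)  # ['os', 'json', 'utils']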

View File

@ -118,7 +118,8 @@ def create_toc(title, file, entries):
# assemble rows
rows = []
for entry in entries:
rows.append('- **[{}]({})** ({})'.format(entry['name'], '../' + entry['file'], ', '.join(entry['code language'] + entry['code license'] + entry['state'])))
rows.append('- **[{}]({})** ({})'.format(entry['name'], '../' + entry['file'], ', '.join(
entry['code language'] + entry['code license'] + entry['state'])))
# sort rows (by title)
rows.sort(key=str.casefold)
@ -148,37 +149,38 @@ def check_validity_external_links():
number_checked_links = 0
# ignore the following urls (they give false positives here)
ignored_urls = ('https://git.tukaani.org/xz.git')
ignored_urls = ('https://git.tukaani.org/xz.git',)
# iterate over all entries
for _, entry_path, content in osg.entry_iterator():
# apply regex
matches = regex.findall(content)
# apply regex
matches = regex.findall(content)
# for each match
for match in matches:
# for each match
for match in matches:
# for each possible clause
for url in match:
# for each possible clause
for url in match:
# if there was something (and not a sourceforge git url)
if url and not url.startswith('https://git.code.sf.net/p/') and url not in ignored_urls:
try:
# without a special header, frequent 403 responses occur
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64)'})
urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
print("{}: {} - {}".format(os.path.basename(entry_path), url, e.code))
except urllib.error.URLError as e:
print("{}: {} - {}".format(os.path.basename(entry_path), url, e.reason))
except http.client.RemoteDisconnected:
print("{}: {} - disconnected without response".format(os.path.basename(entry_path), url))
# if there was something (and not a sourceforge git url)
if url and not url.startswith('https://git.code.sf.net/p/') and url not in ignored_urls:
try:
# without a special header, frequent 403 responses occur
req = urllib.request.Request(url,
headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64)'})
urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
print("{}: {} - {}".format(os.path.basename(entry_path), url, e.code))
except urllib.error.URLError as e:
print("{}: {} - {}".format(os.path.basename(entry_path), url, e.reason))
except http.client.RemoteDisconnected:
print("{}: {} - disconnected without response".format(os.path.basename(entry_path), url))
number_checked_links += 1
number_checked_links += 1
if number_checked_links % 50 == 0:
print("{} links checked".format(number_checked_links))
if number_checked_links % 50 == 0:
print("{} links checked".format(number_checked_links))
print("{} links checked".format(number_checked_links))
@ -354,9 +356,10 @@ def update_statistics(infos):
# total number
number_entries = len(infos)
rel = lambda x: x / number_entries * 100 # conversion to percent
rel = lambda x: x / number_entries * 100 # conversion to percent
statistics += 'analyzed {} entries on {}\n\n'.format(number_entries, datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
statistics += 'analyzed {} entries on {}\n\n'.format(number_entries,
datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
# State (beta, mature, inactive)
statistics += '## State\n\n'
@ -364,12 +367,14 @@ def update_statistics(infos):
number_state_beta = sum(1 for x in infos if 'beta' in x['state'])
number_state_mature = sum(1 for x in infos if 'mature' in x['state'])
number_inactive = sum(1 for x in infos if 'inactive' in x)
statistics += '- mature: {} ({:.1f}%)\n- beta: {} ({:.1f}%)\n- inactive: {} ({:.1f}%)\n\n'.format(number_state_mature, rel(number_state_mature), number_state_beta, rel(number_state_beta), number_inactive, rel(number_inactive))
statistics += '- mature: {} ({:.1f}%)\n- beta: {} ({:.1f}%)\n- inactive: {} ({:.1f}%)\n\n'.format(
number_state_mature, rel(number_state_mature), number_state_beta, rel(number_state_beta), number_inactive,
rel(number_inactive))
if number_inactive > 0:
entries_inactive = [(x['name'], x['inactive']) for x in infos if 'inactive' in x]
entries_inactive.sort(key=lambda x: str.casefold(x[0])) # first sort by name
entries_inactive.sort(key=lambda x: x[1], reverse=True) # then sort by inactive year (more recently first)
entries_inactive.sort(key=lambda x: x[1], reverse=True) # then sort by inactive year (more recently first)
entries_inactive = ['{} ({})'.format(*x) for x in entries_inactive]
statistics += '##### Inactive State\n\n' + ', '.join(entries_inactive) + '\n\n'
@ -394,9 +399,9 @@ def update_statistics(infos):
unique_languages = set(languages)
unique_languages = [(l, languages.count(l) / len(languages)) for l in unique_languages]
unique_languages.sort(key=lambda x: str.casefold(x[0])) # first sort by name
unique_languages.sort(key=lambda x: x[1], reverse=True) # then sort by occurrence (highest occurrence first)
unique_languages = ['- {} ({:.1f}%)\n'.format(x[0], x[1]*100) for x in unique_languages]
unique_languages.sort(key=lambda x: str.casefold(x[0])) # first sort by name
unique_languages.sort(key=lambda x: x[1], reverse=True) # then sort by occurrence (highest occurrence first)
unique_languages = ['- {} ({:.1f}%)\n'.format(x[0], x[1] * 100) for x in unique_languages]
statistics += '##### Language frequency\n\n' + ''.join(unique_languages) + '\n'
# Licenses
@ -419,9 +424,9 @@ def update_statistics(infos):
unique_licenses = set(licenses)
unique_licenses = [(l, licenses.count(l) / len(licenses)) for l in unique_licenses]
unique_licenses.sort(key=lambda x: str.casefold(x[0])) # first sort by name
unique_licenses.sort(key=lambda x: -x[1]) # then sort by occurrence (highest occurrence first)
unique_licenses = ['- {} ({:.1f}%)\n'.format(x[0], x[1]*100) for x in unique_licenses]
unique_licenses.sort(key=lambda x: str.casefold(x[0])) # first sort by name
unique_licenses.sort(key=lambda x: -x[1]) # then sort by occurrence (highest occurrence first)
unique_licenses = ['- {} ({:.1f}%)\n'.format(x[0], x[1] * 100) for x in unique_licenses]
statistics += '##### Licenses frequency\n\n' + ''.join(unique_licenses) + '\n'
# Keywords
@ -440,9 +445,9 @@ def update_statistics(infos):
unique_keywords = set(keywords)
unique_keywords = [(l, keywords.count(l) / len(keywords)) for l in unique_keywords]
unique_keywords.sort(key=lambda x: str.casefold(x[0])) # first sort by name
unique_keywords.sort(key=lambda x: -x[1]) # then sort by occurrence (highest occurrence first)
unique_keywords = ['- {} ({:.1f}%)'.format(x[0], x[1]*100) for x in unique_keywords]
unique_keywords.sort(key=lambda x: str.casefold(x[0])) # first sort by name
unique_keywords.sort(key=lambda x: -x[1]) # then sort by occurrence (highest occurrence first)
unique_keywords = ['- {} ({:.1f}%)'.format(x[0], x[1] * 100) for x in unique_keywords]
statistics += '##### Keywords frequency\n\n' + '\n'.join(unique_keywords) + '\n\n'
# no download or play field
@ -453,7 +458,7 @@ def update_statistics(infos):
if 'download' not in info and 'play' not in info:
entries.append(info['name'])
entries.sort(key=str.casefold)
statistics += '{}: '.format(len(entries)) + ', '.join(entries) + '\n\n'
statistics += '{}: '.format(len(entries)) + ', '.join(entries) + '\n\n'
# code hosted not on github, gitlab, bitbucket, launchpad, sourceforge
popular_code_repositories = ('github.com', 'gitlab.com', 'bitbucket.org', 'code.sf.net', 'code.launchpad.net')
@ -487,13 +492,15 @@ def update_statistics(infos):
if field in info:
code_dependencies.extend(info[field])
entries_with_code_dependency += 1
statistics += 'With code dependency field {} ({:.1f}%)\n\n'.format(entries_with_code_dependency, rel(entries_with_code_dependency))
statistics += 'With code dependency field {} ({:.1f}%)\n\n'.format(entries_with_code_dependency,
rel(entries_with_code_dependency))
unique_code_dependencies = set(code_dependencies)
unique_code_dependencies = [(l, code_dependencies.count(l) / len(code_dependencies)) for l in unique_code_dependencies]
unique_code_dependencies.sort(key=lambda x: str.casefold(x[0])) # first sort by name
unique_code_dependencies.sort(key=lambda x: -x[1]) # then sort by occurrence (highest occurrence first)
unique_code_dependencies = ['- {} ({:.1f}%)'.format(x[0], x[1]*100) for x in unique_code_dependencies]
unique_code_dependencies = [(l, code_dependencies.count(l) / len(code_dependencies)) for l in
unique_code_dependencies]
unique_code_dependencies.sort(key=lambda x: str.casefold(x[0])) # first sort by name
unique_code_dependencies.sort(key=lambda x: -x[1]) # then sort by occurrence (highest occurrence first)
unique_code_dependencies = ['- {} ({:.1f}%)'.format(x[0], x[1] * 100) for x in unique_code_dependencies]
statistics += '##### Code dependencies frequency\n\n' + '\n'.join(unique_code_dependencies) + '\n\n'
# Build systems:
@ -510,10 +517,11 @@ def update_statistics(infos):
unique_build_systems = set(build_systems)
unique_build_systems = [(l, build_systems.count(l) / len(build_systems)) for l in unique_build_systems]
unique_build_systems.sort(key=lambda x: str.casefold(x[0])) # first sort by name
unique_build_systems.sort(key=lambda x: -x[1]) # then sort by occurrence (highest occurrence first)
unique_build_systems = ['- {} ({:.1f}%)'.format(x[0], x[1]*100) for x in unique_build_systems]
statistics += '##### Build systems frequency ({})\n\n'.format(len(build_systems)) + '\n'.join(unique_build_systems) + '\n\n'
unique_build_systems.sort(key=lambda x: str.casefold(x[0])) # first sort by name
unique_build_systems.sort(key=lambda x: -x[1]) # then sort by occurrence (highest occurrence first)
unique_build_systems = ['- {} ({:.1f}%)'.format(x[0], x[1] * 100) for x in unique_build_systems]
statistics += '##### Build systems frequency ({})\n\n'.format(len(build_systems)) + '\n'.join(
unique_build_systems) + '\n\n'
# C, C++ projects without build system information
c_cpp_project_without_build_system = []
@ -521,15 +529,18 @@ def update_statistics(infos):
if field not in info and ('C' in info['code language'] or 'C++' in info['code language']):
c_cpp_project_without_build_system.append(info['name'])
c_cpp_project_without_build_system.sort(key=str.casefold)
statistics += '##### C and C++ projects without build system information ({})\n\n'.format(len(c_cpp_project_without_build_system)) + ', '.join(c_cpp_project_without_build_system) + '\n\n'
statistics += '##### C and C++ projects without build system information ({})\n\n'.format(
len(c_cpp_project_without_build_system)) + ', '.join(c_cpp_project_without_build_system) + '\n\n'
# C, C++ projects with build system information but without CMake as build system
c_cpp_project_not_cmake = []
for info in infos:
if field in info and 'CMake' in info[field] and ('C' in info['code language'] or 'C++' in info['code language']):
if field in info and 'CMake' in info[field] and (
'C' in info['code language'] or 'C++' in info['code language']):
c_cpp_project_not_cmake.append(info['name'])
c_cpp_project_not_cmake.sort(key=str.casefold)
statistics += '##### C and C++ projects with a build system different from CMake ({})\n\n'.format(len(c_cpp_project_not_cmake)) + ', '.join(c_cpp_project_not_cmake) + '\n\n'
statistics += '##### C and C++ projects with a build system different from CMake ({})\n\n'.format(
len(c_cpp_project_not_cmake)) + ', '.join(c_cpp_project_not_cmake) + '\n\n'
# Platform
statistics += '## Platform\n\n'
@ -545,9 +556,9 @@ def update_statistics(infos):
unique_platforms = set(platforms)
unique_platforms = [(l, platforms.count(l) / len(platforms)) for l in unique_platforms]
unique_platforms.sort(key=lambda x: str.casefold(x[0])) # first sort by name
unique_platforms.sort(key=lambda x: -x[1]) # then sort by occurrence (highest occurrence first)
unique_platforms = ['- {} ({:.1f}%)'.format(x[0], x[1]*100) for x in unique_platforms]
unique_platforms.sort(key=lambda x: str.casefold(x[0])) # first sort by name
unique_platforms.sort(key=lambda x: -x[1]) # then sort by occurrence (highest occurrence first)
unique_platforms = ['- {} ({:.1f}%)'.format(x[0], x[1] * 100) for x in unique_platforms]
statistics += '##### Platforms frequency\n\n' + '\n'.join(unique_platforms) + '\n\n'
# write to statistics file
@ -570,8 +581,9 @@ def export_json(infos):
# game & description
entry = ['{} (<a href="{}">home</a>, <a href="{}">entry</a>)'.format(info['name'], info['home'][0],
r'https://github.com/Trilarion/opensourcegames/blob/master/entries/' + info['file']),
textwrap.shorten(info['description'], width=60, placeholder='..')]
r'https://github.com/Trilarion/opensourcegames/blob/master/entries/' +
info['file']),
textwrap.shorten(info['description'], width=60, placeholder='..')]
# download
field = 'download'
@ -581,7 +593,8 @@ def export_json(infos):
entry.append('')
# state (field state is essential)
entry.append('{} / {}'.format(info['state'][0], 'inactive since {}'.format(info['inactive']) if 'inactive' in info else 'active'))
entry.append('{} / {}'.format(info['state'][0],
'inactive since {}'.format(info['inactive']) if 'inactive' in info else 'active'))
# keywords
field = 'keywords'
@ -627,7 +640,8 @@ def git_repo(repo):
return repo
# for all others we just check if they start with the typical urls of git services
services = ['https://git.tuxfamily.org/', 'http://git.pond.sub.org/', 'https://gitorious.org/', 'https://git.code.sf.net/p/']
services = ['https://git.tuxfamily.org/', 'http://git.pond.sub.org/', 'https://gitorious.org/',
'https://git.code.sf.net/p/']
for service in services:
if repo.startswith(service):
return repo
@ -649,7 +663,7 @@ def svn_repo(repo):
if repo.startswith('http://svn.uktrainsim.com/svn/'):
return repo
if repo is 'https://rpg.hamsterrepublic.com/source/wip':
if repo == 'https://rpg.hamsterrepublic.com/source/wip':
return repo
if repo.startswith('http://svn.savannah.gnu.org/svn/'):
@ -660,7 +674,7 @@ def svn_repo(repo):
if repo.startswith('https://svn.icculus.org/') or repo.startswith('http://svn.icculus.org/'):
return repo
# not svn
return None
@ -720,7 +734,7 @@ def export_primary_code_repositories_json(infos):
url = hg_repo(repo)
if url:
primary_repos['hg'].append(url)
consumed=True
consumed = True
continue
if not consumed:
@ -736,7 +750,10 @@ def export_primary_code_repositories_json(infos):
# statistics of gits
git_repos = primary_repos['git']
print('{} Git repositories'.format(len(git_repos)))
for domain in ('repo.or.cz', 'anongit.kde.org', 'bitbucket.org', 'git.code.sf.net', 'git.savannah', 'git.tuxfamily', 'github.com', 'gitlab.com', 'gitlab.com/osgames', 'gitlab.gnome.org'):
for domain in (
'repo.or.cz', 'anongit.kde.org', 'bitbucket.org', 'git.code.sf.net', 'git.savannah', 'git.tuxfamily',
'github.com',
'gitlab.com', 'gitlab.com/osgames', 'gitlab.gnome.org'):
print('{} on {}'.format(sum(1 if domain in x else 0 for x in git_repos), domain))
# write them to code/git
@ -787,7 +804,6 @@ def sort_text_file(file, name):
def clean_backlog(stripped_game_urls):
# read backlog and split
file = os.path.join(c.root_path, 'code', 'backlog.txt')
text = utils.read_text(file)
@ -915,7 +931,8 @@ def check_code_dependencies(infos):
dependencies[dependency] = 1
# delete those that are in names
dependencies = [(k, v) for k,v in dependencies.items() if k not in names and k not in osg.code_dependencies_without_entry]
dependencies = [(k, v) for k, v in dependencies.items() if
k not in names and k not in osg.code_dependencies_without_entry]
# sort by number
dependencies.sort(key=lambda x: x[1], reverse=True)

View File

@ -15,14 +15,16 @@ def developer_info_lookup(name):
return dev
return None
# author names in SF that aren't the author names how we have them
SF_alias_list = {'Erik Johansson (aka feneur)': 'Erik Johansson', 'Itms': 'Nicolas Auvray', 'Wraitii': 'Lancelot de Ferrière', 'Simzer': 'Simon Laszlo', 'armin bajramovic': 'Armin Bajramovic'}
SF_alias_list = {'Erik Johansson (aka feneur)': 'Erik Johansson', 'Itms': 'Nicolas Auvray',
'Wraitii': 'Lancelot de Ferrière', 'Simzer': 'Simon Laszlo', 'armin bajramovic': 'Armin Bajramovic'}
if __name__ == "__main__":
# read developer info
developer_info = osg.read_developer_info()
osg.write_developer_info(developer_info) # write again just to make nice
osg.write_developer_info(developer_info) # write again just to make nice
# assemble info
entries = osg.assemble_infos()
@ -34,12 +36,12 @@ if __name__ == "__main__":
developers = ''
try:
i = 0
#active = False
# active = False
for entry in entries:
#if entry['name'] == 'Aleph One':
# if entry['name'] == 'Aleph One':
# active = True
#if not active:
# if not active:
# continue
# for testing purposes
@ -72,7 +74,7 @@ if __name__ == "__main__":
# sometimes author already contains the full url, sometimes not
url = 'https://sourceforge.net' + author if not author.startswith('http') else author
response = requests.get(url)
url = response.url # could be different now
url = response.url # could be different now
if 'auth/?return_to' in url:
# for some reason authorisation is forbidden
author_name = author
@ -80,8 +82,8 @@ if __name__ == "__main__":
else:
soup = BeautifulSoup(response.text, 'html.parser')
author_name = soup.h1.get_text()
author_name = SF_alias_list.get(author_name, author_name) # replace by alias if possible
nickname = soup.find('dl', class_= 'personal-data').find('dd').get_text()
author_name = SF_alias_list.get(author_name, author_name) # replace by alias if possible
nickname = soup.find('dl', class_='personal-data').find('dd').get_text()
nickname = nickname.replace('\n', '').strip()
dev = developer_info_lookup(author_name)
in_devs = dev and 'contact' in dev and nickname + '@SF' in dev['contact']
@ -123,10 +125,9 @@ if __name__ == "__main__":
if content:
developers += '{}\n\n{}\n'.format(entry_name, content)
except RuntimeError as e:
raise(e)
raise e
# pass
finally:
# store developer info
utils.write_text(os.path.join(c.root_path, 'collected_developer_info.txt'), developers)
utils.write_text(os.path.join(c.root_path, 'collected_developer_info.txt'), developers)

View File

@ -6,4 +6,4 @@ if __name__ == "__main__":
osg.write_inspirations_info(inspirations) # write again just to check integrity
# assemble info
entries = osg.assemble_infos()
entries = osg.assemble_infos()

View File

@ -36,25 +36,50 @@ video: not used
TODO also ignore our rejected entries
"""
import ruamel_yaml as yaml
import ruamel.yaml as yaml
import os
from utils import constants, utils, osg
# should change on osgameclones
osgc_name_aliases = {'4DTris': '4D-TRIS', 'fheroes2': 'Free Heroes 2', 'DrCreep': 'The Castles of Dr. Creep', 'Duke3d_win32': 'Duke3d_w32', 'erampage (EDuke32 fork)': 'erampage', 'GNOME Atomix': 'Atomix', 'Head over Heels 2': 'Head over Heels',
'mewl': 'M.E.W.L.', 'LinWarrior': 'Linwarrior 3D', 'Mice Men Remix': 'Mice Men: Remix', 'OpenApoc': 'Open Apocalypse', 'open-cube': 'Open Cube', 'open-horizon': 'Open Horizon', 'opengl_test_drive_clone': 'OpenGL Test Drive Remake',
'Play Freeciv!': 'Freeciv-web', 'ProjectX': 'Forsaken', 'Siege of Avalon Open Source': 'Siege of Avalon : Open Source', 'ss13remake': 'SS13 Remake', 'shadowgrounds': 'Shadowgrounds', 'RxWars': 'Prescription Wars', 'Super Mario Bros And Level Editor in C#': 'Mario Objects',
'tetris': 'Just another Tetris™ clone', 'twin-e': 'TwinEngine', 'CrossUO: Ultima Online': 'CrossUO', 'Doomsday': 'Doomsday Engine', 'OpMon': 'OPMon'}
osgc_name_aliases = {'4DTris': '4D-TRIS', 'fheroes2': 'Free Heroes 2', 'DrCreep': 'The Castles of Dr. Creep',
'Duke3d_win32': 'Duke3d_w32', 'erampage (EDuke32 fork)': 'erampage', 'GNOME Atomix': 'Atomix',
'Head over Heels 2': 'Head over Heels',
'mewl': 'M.E.W.L.', 'LinWarrior': 'Linwarrior 3D', 'Mice Men Remix': 'Mice Men: Remix',
'OpenApoc': 'Open Apocalypse', 'open-cube': 'Open Cube', 'open-horizon': 'Open Horizon',
'opengl_test_drive_clone': 'OpenGL Test Drive Remake',
'Play Freeciv!': 'Freeciv-web', 'ProjectX': 'Forsaken',
'Siege of Avalon Open Source': 'Siege of Avalon : Open Source', 'ss13remake': 'SS13 Remake',
'shadowgrounds': 'Shadowgrounds', 'RxWars': 'Prescription Wars',
'Super Mario Bros And Level Editor in C#': 'Mario Objects',
'tetris': 'Just another Tetris™ clone', 'twin-e': 'TwinEngine',
'CrossUO: Ultima Online': 'CrossUO', 'Doomsday': 'Doomsday Engine', 'OpMon': 'OPMon'}
# conversion between licenses syntax them and us
osgc_licenses_map = {'GPL2': 'GPL-2.0', 'GPL3': 'GPL-3.0', 'AGPL3': 'AGPL-3.0', 'LGPL3': 'LGPL-3.0', 'LGPL2': 'LGPL-2.0 or 2.1?', 'MPL': 'MPL-2.0', 'Apache': 'Apache-2.0', 'Artistic': 'Artistic License', 'Zlib': 'zlib', 'PD': 'Public domain', 'AFL3': 'AFL-3.0', 'BSD2': '2-clause BSD'}
osgc_licenses_map = {'GPL2': 'GPL-2.0', 'GPL3': 'GPL-3.0', 'AGPL3': 'AGPL-3.0', 'LGPL3': 'LGPL-3.0',
'LGPL2': 'LGPL-2.0 or 2.1?', 'MPL': 'MPL-2.0', 'Apache': 'Apache-2.0',
'Artistic': 'Artistic License', 'Zlib': 'zlib', 'PD': 'Public domain', 'AFL3': 'AFL-3.0',
'BSD2': '2-clause BSD'}
# ignore osgc entries (for various reasons like unclear license etc.)
osgc_ignored_entries = ["A Mouse's Vengeance", 'achtungkurve.com', 'AdaDoom3', 'Agendaroids', 'Alien 8', 'Ard-Reil', 'Balloon Fight', 'bladerunner (Engine within SCUMMVM)', 'Block Shooter', 'Bomb Mania Reloaded', 'boulder-dash', 'Cannon Fodder', 'Contra_remake', 'CosmicArk-Advanced', 'Deuteros X', 'datastorm'
, 'div-columns', 'div-pacman2600', 'div-pitfall', 'div-spaceinvaders2600', 'EXILE', 'Free in the Dark', 'Football Manager', 'Fight Or Perish', 'EarthShakerDS', 'Entombed!', 'FreeRails 2', 'Glest Advanced Engine', 'FreedroidClassic', 'FreeFT', 'Future Blocks', 'HeadOverHeels'
, 'Herzog 3D', 'Homeworld SDL', 'imperialism-remake', 'Jumping Jack 2: Worryingly Familiar', 'Jumping Jack: Further Adventures', 'Jumpman', 'legion', 'KZap', 'LastNinja', 'Lemmix', 'LixD', 'luminesk5', 'Manic Miner', 'Meridian 59 Server 105', 'Meridian 59 German Server 112', 'Mining Haze'
, 'OpenGeneral', 'MonoStrategy', 'New RAW', 'OpenDeathValley', 'OpenOutcast', 'openStrato', 'OpenPop', 'pacman', 'Phavon', 'PKMN-FX', 'Project: Xenocide', 'pyspaceinvaders', 'PyTouhou', 'Racer', 'Ruby OMF 2097 Remake', 'Snipes', 'Spaceship Duel', 'Space Station 14', 'Starlane Empire'
, 'Styx', 'Super Mario Bros With SFML in C#', 'thromolusng', 'Tile World 2', 'Tranzam', 'Voxelstein 3D', 'XQuest 2', 'xrick', 'zedragon', 'Uncharted waters 2 remake', 'Desktop Adventures Engine for ScummVM', 'Open Sonic', 'Aladdin_DirectX', 'Alive_Reversing']
osgc_ignored_entries = ["A Mouse's Vengeance", 'achtungkurve.com', 'AdaDoom3', 'Agendaroids', 'Alien 8', 'Ard-Reil',
'Balloon Fight', 'bladerunner (Engine within SCUMMVM)', 'Block Shooter', 'Bomb Mania Reloaded',
'boulder-dash', 'Cannon Fodder', 'Contra_remake', 'CosmicArk-Advanced', 'Deuteros X',
'datastorm', 'div-columns', 'div-pacman2600', 'div-pitfall', 'div-spaceinvaders2600', 'EXILE',
'Free in the Dark',
'Football Manager', 'Fight Or Perish', 'EarthShakerDS', 'Entombed!', 'FreeRails 2',
'Glest Advanced Engine', 'FreedroidClassic', 'FreeFT', 'Future Blocks', 'HeadOverHeels'
, 'Herzog 3D', 'Homeworld SDL', 'imperialism-remake', 'Jumping Jack 2: Worryingly Familiar',
'Jumping Jack: Further Adventures', 'Jumpman', 'legion', 'KZap', 'LastNinja', 'Lemmix', 'LixD',
'luminesk5', 'Manic Miner', 'Meridian 59 Server 105', 'Meridian 59 German Server 112',
'Mining Haze', 'OpenGeneral', 'MonoStrategy', 'New RAW', 'OpenDeathValley', 'OpenOutcast',
'openStrato', 'OpenPop', 'pacman',
'Phavon', 'PKMN-FX', 'Project: Xenocide', 'pyspaceinvaders', 'PyTouhou', 'Racer',
'Ruby OMF 2097 Remake', 'Snipes', 'Spaceship Duel', 'Space Station 14', 'Starlane Empire',
'Styx', 'Super Mario Bros With SFML in C#', 'thromolusng', 'Tile World 2', 'Tranzam',
'Voxelstein 3D', 'XQuest 2',
'xrick', 'zedragon', 'Uncharted waters 2 remake', 'Desktop Adventures Engine for ScummVM',
'Open Sonic', 'Aladdin_DirectX', 'Alive_Reversing']
def unique_field_contents(entries, field):
"""
@ -74,6 +99,7 @@ def unique_field_contents(entries, field):
def compare_sets(a, b, name, limit=None):
"""
:param limit:
:param a:
:param b:
:param name:
@ -100,7 +126,7 @@ if __name__ == "__main__":
maximal_newly_created_entries = 40
# paths
root_path = os.path.realpath(os.path.join(os.path.dirname(__file__), os.path.pardir))
root_path = os.path.realpath(os.path.join(os.path.dirname(__file__), os.path.pardir))
# import the osgameclones data
osgc_path = os.path.realpath(os.path.join(root_path, os.path.pardir, '11_osgameclones.git', 'games'))
@ -155,15 +181,15 @@ if __name__ == "__main__":
print('{}: {}'.format(field, ', '.join(statistics)))
# eliminate the ignored entries
_ = [x['name'] for x in osgc_entries if x['name'] in osgc_ignored_entries] # those that will be ignored
_ = set(osgc_ignored_entries) - set(_) # those that shall be ignored minus those that will be ignored
_ = [x['name'] for x in osgc_entries if x['name'] in osgc_ignored_entries] # those that will be ignored
_ = set(osgc_ignored_entries) - set(_) # those that shall be ignored minus those that will be ignored
if _:
print('Can un-ignore {}'.format(_))
osgc_entries = [x for x in osgc_entries if x['name'] not in osgc_ignored_entries]
# fix names and licenses (so they are not longer detected as deviations downstreams)
_ = [x['name'] for x in osgc_entries if x['name'] in osgc_name_aliases.keys()] # those that will be renamed
_ = set(osgc_name_aliases.keys()) - set(_) # those that shall be renamed minus those that will be renamed
_ = [x['name'] for x in osgc_entries if x['name'] in osgc_name_aliases.keys()] # those that will be renamed
_ = set(osgc_name_aliases.keys()) - set(_) # those that shall be renamed minus those that will be renamed
if _:
print('Can un-rename {}'.format(_))
for index, entry in enumerate(osgc_entries):
@ -181,7 +207,7 @@ if __name__ == "__main__":
osgc_content = [osgc_content]
osgc_content = [x + ' content' for x in osgc_content]
entry['content'] = osgc_content
osgc_entries[index] = entry # TODO is this necessary or is the entry modified anyway?
osgc_entries[index] = entry # TODO is this necessary or is the entry modified anyway?
# which fields do they have
osgc_fields = set()
@ -215,11 +241,12 @@ if __name__ == "__main__":
common_names = osgc_names & our_names
osgc_names -= common_names
our_names -= common_names
print('{} in both, {} only in osgameclones, {} only with us'.format(len(common_names), len(osgc_names), len(our_names)))
print('{} in both, {} only in osgameclones, {} only with us'.format(len(common_names), len(osgc_names),
len(our_names)))
# find similar names among the rest
#print('look for similar names')
#for osgc_name in osgc_names:
# print('look for similar names')
# for osgc_name in osgc_names:
# for our_name in our_names:
# if osg.game_name_similarity(osgc_name, our_name) > similarity_threshold:
# print(' {} - {}'.format(osgc_name, our_name))
@ -246,13 +273,13 @@ if __name__ == "__main__":
osgc_languages = osgc_entry['lang']
if type(osgc_languages) == str:
osgc_languages = [osgc_languages]
our_languages = our_entry['code language'] # essential field
our_languages = our_entry['code language'] # essential field
p += compare_sets(osgc_languages, our_languages, 'code language')
# compare their license with our code and assets license
if 'license' in osgc_entry:
osgc_licenses = osgc_entry['license']
our_code_licenses = our_entry['code license'] # essential field
our_code_licenses = our_entry['code license'] # essential field
our_assets_licenses = our_entry.get('assets license', [])
p += compare_sets(osgc_licenses, our_code_licenses + our_assets_licenses, 'licenses', 'notthem')
p += compare_sets(osgc_licenses, our_code_licenses, 'licenses', 'notus')
@ -265,7 +292,8 @@ if __name__ == "__main__":
osgc_frameworks = [osgc_frameworks]
our_frameworks = our_entry.get('code dependencies', [])
our_frameworks = [x.casefold() for x in our_frameworks]
our_frameworks = [x if x not in our_framework_replacements else our_framework_replacements[x] for x in our_frameworks]
our_frameworks = [x if x not in our_framework_replacements else our_framework_replacements[x] for x
in our_frameworks]
osgc_frameworks = [x.casefold() for x in osgc_frameworks]
p += compare_sets(osgc_frameworks, our_frameworks, 'framework/dependencies')
@ -275,16 +303,21 @@ if __name__ == "__main__":
if type(osgc_repos) == str:
osgc_repos = [osgc_repos]
osgc_repos = [utils.strip_url(url) for url in osgc_repos]
osgc_repos = [x for x in osgc_repos if not x.startswith('sourceforge.net/projects/')] # we don't need the general sites there
osgc_repos = [x for x in osgc_repos if not x.startswith(
'sourceforge.net/projects/')] # we don't need the general sites there
# osgc_repos = [x for x in osgc_repos if not x.startswith('https://sourceforge.net/projects/')] # ignore some
our_repos = our_entry.get('code repository', [])
our_repos = [utils.strip_url(url) for url in our_repos]
our_repos = [x for x in our_repos if not x.startswith('gitlab.com/osgames/')] # we do not yet spread our own deeds (but we will some day)
our_repos = [x for x in our_repos if not 'cvs.sourceforge.net' in x and not 'svn.code.sf.net/p/' in x] # no cvs or svn anymore
our_repos = [x for x in our_repos if not x.startswith(
'gitlab.com/osgames/')] # we do not yet spread our own deeds (but we will some day)
our_repos = [x for x in our_repos if
'cvs.sourceforge.net' not in x and 'svn.code.sf.net/p/' not in x] # no cvs or svn anymore
our_downloads = our_entry.get('download', [])
our_downloads = [utils.strip_url(url) for url in our_downloads]
p += compare_sets(osgc_repos, our_repos + our_downloads, 'repo', 'notthem') # if their repos are not in our downloads or repos
p += compare_sets(osgc_repos, our_repos[:1], 'repo', 'notus') # if our main repo is not in their repo
p += compare_sets(osgc_repos, our_repos + our_downloads, 'repo',
'notthem') # if their repos are not in our downloads or repos
p += compare_sets(osgc_repos, our_repos[:1], 'repo',
'notus') # if our main repo is not in their repo
# compare their url (and feed) to our home (and strip urls)
if 'url' in osgc_entry:
@ -294,14 +327,16 @@ if __name__ == "__main__":
osgc_urls = [utils.strip_url(url) for url in osgc_urls]
our_urls = our_entry['home']
our_urls = [utils.strip_url(url) for url in our_urls]
our_urls = [url for url in our_urls if not url.startswith('github.com/')] # they don't have them as url
p += compare_sets(osgc_urls, our_urls, 'url/home', 'notthem') # if their urls are not in our urls
p += compare_sets(osgc_urls, our_urls[:1], 'url/home', 'notus') # if our first url is not in their urls
our_urls = [url for url in our_urls if
not url.startswith('github.com/')] # they don't have them as url
p += compare_sets(osgc_urls, our_urls, 'url/home', 'notthem') # if their urls are not in our urls
p += compare_sets(osgc_urls, our_urls[:1], 'url/home',
'notus') # if our first url is not in their urls
# compare their status with our state (playable can be beta/mature with us, but not playable must be beta)
if 'status' in osgc_entry:
osgc_status = osgc_entry['status']
our_status = our_entry['state'] # essential field
our_status = our_entry['state'] # essential field
if osgc_status != 'playable' and 'mature' in our_status:
p += ' status : mismatch : them {}, us mature\n'.format(osgc_status)
@ -321,13 +356,14 @@ if __name__ == "__main__":
our_keywords = our_entry['keywords']
if 'originals' in osgc_entry:
osgc_originals = osgc_entry['originals']
osgc_originals = [x.replace(',', '') for x in osgc_originals] # we cannot have ',' or parts in parentheses in original names
osgc_originals = [x.replace(',', '') for x in
osgc_originals] # we cannot have ',' or parts in parentheses in original names
our_originals = [x for x in our_keywords if x.startswith('inspired by ')]
if our_originals:
assert len(our_originals) == 1, '{}: {}'.format(our_name, our_originals)
our_originals = our_originals[0][11:].split('+')
our_originals = [x.strip() for x in our_originals]
our_originals = [x for x in our_originals if x not in ['Doom II']] # ignore same
our_originals = [x for x in our_originals if x not in ['Doom II']] # ignore same
p += compare_sets(osgc_originals, our_originals, 'originals')
# compare their multiplayer with our keywords (multiplayer) (only lowercase comparison)
@ -336,9 +372,12 @@ if __name__ == "__main__":
if type(osgc_multiplayer) == str:
osgc_multiplayer = [osgc_multiplayer]
osgc_multiplayer = [x.casefold() for x in osgc_multiplayer]
osgc_multiplayer = [x for x in osgc_multiplayer if x not in ['competitive']] # ignored
osgc_multiplayer = [x for x in osgc_multiplayer if x not in ['competitive']] # ignored
our_multiplayer = [x for x in our_keywords if x.startswith('multiplayer ')]
if our_multiplayer:
if len(our_multiplayer) != 1:
print(our_entry)
raise RuntimeError()
assert len(our_multiplayer) == 1
our_multiplayer = our_multiplayer[0][11:].split('+')
our_multiplayer = [x.strip().casefold() for x in our_multiplayer]
@ -349,14 +388,16 @@ if __name__ == "__main__":
osgc_content = osgc_entry['content']
if isinstance(osgc_content, str):
osgc_content = [osgc_content]
p += compare_sets(osgc_content, our_keywords, 'content/keywords', 'notthem') # only to us because we have more then them
p += compare_sets(osgc_content, our_keywords, 'content/keywords',
'notthem') # only to us because we have more then them
# compare their type to our keywords
if 'type' in osgc_entry:
game_type = osgc_entry['type']
if isinstance(game_type, str):
game_type = [game_type]
p += compare_sets(game_type, our_keywords, 'type/keywords', 'notthem') # only to us because we have more then them
p += compare_sets(game_type, our_keywords, 'type/keywords',
'notthem') # only to us because we have more then them
if p:
print('{}\n{}'.format(name, p))
@ -387,7 +428,7 @@ if __name__ == "__main__":
print('warning: file {} already existing, save under slightly different name'.format(file_name))
target_file = os.path.join(constants.entries_path, file_name[:-3] + '-duplicate.md')
if os.path.isfile(target_file):
continue # just for safety reasons
continue # just for safety reasons
# add name
entry = '# {}\n\n'.format(osgc_name)
@ -487,7 +528,6 @@ if __name__ == "__main__":
if not is_included:
# that could be added to them
print('- [{}]({})'.format(our_name, 'https://github.com/Trilarion/opensourcegames/blob/master/entries/' + our_entry['file']))
print('- [{}]({})'.format(our_name,
'https://github.com/Trilarion/opensourcegames/blob/master/entries/' + our_entry[
'file']))

View File

@ -34,4 +34,4 @@ def git_folder_name(url):
'https://bitbucket.org': 'bitbucket',
'https://gitlab.gnome.org': 'gnome'
}
return derive_folder_name(url, replaces)
return derive_folder_name(url, replaces)

View File

@ -10,4 +10,4 @@ entries_path = os.path.join(root_path, 'entries')
tocs_path = os.path.join(entries_path, 'tocs')
code_path = os.path.join(root_path, 'code')
local_properties_file = os.path.join(root_path, 'local.properties')
local_properties_file = os.path.join(root_path, 'local.properties')

View File

@ -15,10 +15,10 @@ class ListingTransformer(lark.Transformer):
raise lark.Discard
def property(self, x):
return (x[0].value.lower(), x[1].value)
return x[0].value.lower(), x[1].value
def name(self, x):
return ('name', x[0].value)
return 'name', x[0].value
def entry(self, x):
d = {}
@ -32,8 +32,8 @@ class ListingTransformer(lark.Transformer):
def start(self, x):
return x
# transformer
# transformer
class EntryTransformer(lark.Transformer):
def start(self, x):
@ -43,22 +43,22 @@ class EntryTransformer(lark.Transformer):
return d
def title(self, x):
return ('title', x[0].value)
return 'title', x[0].value
def description(self, x):
return ('description', x[0].value)
return 'description', x[0].value
def property(self, x):
return (str.casefold(x[0].value), x[1].value)
return str.casefold(x[0].value), x[1].value
def note(self, x):
return ('note', x[0].value)
return 'note', x[0].value
def building(self, x):
d = {}
for key, value in x:
d[key] = value
return ('building', d)
return 'building', d
essential_fields = ('Home', 'State', 'Keywords', 'Code repository', 'Code language', 'Code license')
@ -223,7 +223,7 @@ def parse_entry(content):
v = [x for x in v if x]
# if entry is of structure <..> remove <>
v = [x[1:-1] if x[0] is '<' and x[-1] is '>' else x for x in v]
v = [x[1:-1] if x[0] == '<' and x[-1] == '>' else x for x in v]
# empty fields will not be stored
if not v:
@ -364,7 +364,7 @@ def extract_links():
"""
# regex for finding urls (can be in <> or in ]() or after a whitespace
regex = re.compile(r"[\s\n]<(http.+?)>|\]\((http.+?)\)|[\s\n](http[^\s\n,]+?)[\s\n,]")
regex = re.compile(r"[\s\n]<(http.+?)>|]\((http.+?)\)|[\s\n](http[^\s\n,]+?)[\s\n,]")
# iterate over all entries
urls = set()
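
A self-contained illustration of the URL regex above applied to an invented entry fragment (it catches links in <>, in ](), and bare links followed by whitespace):

import re

regex = re.compile(r"[\s\n]<(http.+?)>|]\((http.+?)\)|[\s\n](http[^\s\n,]+?)[\s\n,]")
sample = "- Home: <https://example.org/game>\n- See [repo](https://github.com/example/game) and https://example.org/wiki \n"
urls = set()
for match in regex.findall(sample):
    # each match is a tuple of the three alternatives, only one group is non-empty
    urls.update(url for url in match if url)
print(sorted(urls))
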
@ -409,7 +409,7 @@ def read_developer_info():
developer_file = os.path.join(c.root_path, 'developer.md')
grammar_file = os.path.join(c.code_path, 'grammar_listing.lark')
transformer = ListingTransformer()
developers = read_and_parse(developer_file, grammar_file, transformer)
developers = read_and_parse(developer_file, grammar_file, transformer)
# now transform a bit more
for index, dev in enumerate(developers):
# check for valid keys
@ -430,7 +430,7 @@ def read_developer_info():
# check for duplicate names entries
names = [dev['name'] for dev in developers]
duplicate_names = (name for name in names if names.count(name) > 1)
duplicate_names = set(duplicate_names) # to avoid duplicates in duplicate_names
duplicate_names = set(duplicate_names) # to avoid duplicates in duplicate_names
if duplicate_names:
print('Warning: duplicate developer names: {}'.format(', '.join(duplicate_names)))
return developers
@ -474,7 +474,6 @@ def write_developer_info(developers):
utils.write_text(developer_file, content)
def read_inspirations_info():
"""
@ -491,7 +490,7 @@ def read_inspirations_info():
if field not in valid_inspiration_fields:
raise RuntimeError('Unknown field "{}" for inspiration: {}.'.format(field, inspiration['name']))
# split lists
for field in ('inspired entries', ):
for field in ('inspired entries',):
if field in inspiration:
content = inspiration[field]
content = content.split(',')
@ -500,7 +499,7 @@ def read_inspirations_info():
# check for duplicate names entries
names = [inspiration['name'] for inspiration in inspirations]
duplicate_names = (name for name in names if names.count(name) > 1)
duplicate_names = set(duplicate_names) # to avoid duplicates in duplicate_names
duplicate_names = set(duplicate_names) # to avoid duplicates in duplicate_names
if duplicate_names:
print('Warning: duplicate inspiration names: {}'.format(', '.join(duplicate_names)))
return inspirations
@ -527,7 +526,8 @@ def write_inspirations_info(inspirations):
content += '## {} ({})\n\n'.format(inspiration['name'], len(inspiration['inspired entries']))
# games
content += '- Inspired entries: {}\n'.format(', '.join(sorted(inspiration['inspired entries'], key=str.casefold)))
content += '- Inspired entries: {}\n'.format(
', '.join(sorted(inspiration['inspired entries'], key=str.casefold)))
# all the remaining in alphabetical order
for field in sorted(inspiration.keys()):
@ -545,7 +545,6 @@ def write_inspirations_info(inspirations):
utils.write_text(inspirations_file, content)
def compare_entries_developers(entries, developers):
"""
Cross checks the game entries lists and the developers lists.
@ -580,11 +579,11 @@ def compare_entries_developers(entries, developers):
games2 = set(devs2[dev])
delta = games1 - games2
if delta:
print('Warning: dev "{}" has games in entries ({}) that are not present in developers'.format(dev, ', '.join(delta)))
print('Warning: dev "{}" has games in entries ({}) that are not present in developers'.format(dev,
', '.join(
delta)))
delta = games2 - games1
if delta:
print('Warning: dev "{}" has games in developers ({}) that are not present in entries'.format(dev, ', '.join(delta)))
print('Warning: dev "{}" has games in developers ({}) that are not present in entries'.format(dev,
', '.join(
delta)))

View File

@ -289,7 +289,7 @@ def load_properties(filepath, sep='=', comment_char='#'):
line = line.strip()
if not line.startswith(comment_char):
line = line.split(sep)
assert(len(line)==2)
assert (len(line) == 2)
key = line[0].strip()
value = line[1].strip()
properties[key] = value
@ -309,4 +309,4 @@ def unique_elements_and_occurrences(elements):
unique_elements = list(unique_elements.items())
unique_elements.sort(key=lambda x: -x[1])
unique_elements = ['{}({})'.format(k, v) for k, v in unique_elements]
return unique_elements
return unique_elements

View File

@ -9429,7 +9429,7 @@
"Indie Turn Based Strategy in Isometric Pixel Art.",
"",
"mature / active",
"strategy, clone, inspired by Advance Wars, multiplayer hotseat, multiplayer online, open content",
"strategy, clone, inspired by Advance Wars, multiplayer hotseat + online, open content",
"<a href=\"https://github.com/w84death/Tanks-of-Freedom.git\">Source</a> - GDScript - MIT"
],
[

View File

@ -5,7 +5,7 @@ _Indie Turn Based Strategy in Isometric Pixel Art._
- Home: https://tof.p1x.in/, https://w84death.itch.io/tanks-of-freedom
- State: mature
- Download: (see home)
- Keywords: strategy, clone, inspired by Advance Wars, multiplayer hotseat, multiplayer online, open content
- Keywords: strategy, clone, inspired by Advance Wars, multiplayer hotseat + online, open content
- Code repository: https://github.com/w84death/Tanks-of-Freedom.git
- Code language: GDScript
- Code license: MIT